00:00:00.002 Started by upstream project "autotest-per-patch" build number 132578 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.114 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.115 The recommended git tool is: git 00:00:00.115 using credential 00000000-0000-0000-0000-000000000002 00:00:00.125 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.146 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.173 Using shallow fetch with depth 1 00:00:00.173 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.173 > git --version # timeout=10 00:00:00.200 > git --version # 'git version 2.39.2' 00:00:00.200 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.225 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.225 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.807 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.819 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.831 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.831 > git config core.sparsecheckout # timeout=10 00:00:04.843 > git read-tree -mu HEAD # timeout=10 00:00:04.859 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.891 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.891 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.974 [Pipeline] Start of Pipeline 00:00:04.987 [Pipeline] library 00:00:04.989 Loading library shm_lib@master 00:00:04.989 Library shm_lib@master is cached. Copying from home. 00:00:05.010 [Pipeline] node 00:00:05.025 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:05.028 [Pipeline] { 00:00:05.043 [Pipeline] catchError 00:00:05.045 [Pipeline] { 00:00:05.059 [Pipeline] wrap 00:00:05.069 [Pipeline] { 00:00:05.075 [Pipeline] stage 00:00:05.077 [Pipeline] { (Prologue) 00:00:05.272 [Pipeline] sh 00:00:05.557 + logger -p user.info -t JENKINS-CI 00:00:05.574 [Pipeline] echo 00:00:05.576 Node: WFP21 00:00:05.585 [Pipeline] sh 00:00:05.889 [Pipeline] setCustomBuildProperty 00:00:05.897 [Pipeline] echo 00:00:05.898 Cleanup processes 00:00:05.901 [Pipeline] sh 00:00:06.182 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.182 3875643 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.196 [Pipeline] sh 00:00:06.484 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.484 ++ grep -v 'sudo pgrep' 00:00:06.484 ++ awk '{print $1}' 00:00:06.484 + sudo kill -9 00:00:06.484 + true 00:00:06.497 [Pipeline] cleanWs 00:00:06.506 [WS-CLEANUP] Deleting project workspace... 00:00:06.506 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.512 [WS-CLEANUP] done 00:00:06.516 [Pipeline] setCustomBuildProperty 00:00:06.528 [Pipeline] sh 00:00:06.808 + sudo git config --global --replace-all safe.directory '*' 00:00:06.878 [Pipeline] httpRequest 00:00:08.150 [Pipeline] echo 00:00:08.151 Sorcerer 10.211.164.101 is alive 00:00:08.159 [Pipeline] retry 00:00:08.160 [Pipeline] { 00:00:08.172 [Pipeline] httpRequest 00:00:08.177 HttpMethod: GET 00:00:08.177 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.178 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.181 Response Code: HTTP/1.1 200 OK 00:00:08.181 Success: Status code 200 is in the accepted range: 200,404 00:00:08.182 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.409 [Pipeline] } 00:00:09.426 [Pipeline] // retry 00:00:09.434 [Pipeline] sh 00:00:09.720 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.736 [Pipeline] httpRequest 00:00:11.128 [Pipeline] echo 00:00:11.130 Sorcerer 10.211.164.101 is alive 00:00:11.141 [Pipeline] retry 00:00:11.143 [Pipeline] { 00:00:11.158 [Pipeline] httpRequest 00:00:11.163 HttpMethod: GET 00:00:11.164 URL: http://10.211.164.101/packages/spdk_24f0cb4c3f83c5e3773ceac60a95836862784b97.tar.gz 00:00:11.164 Sending request to url: http://10.211.164.101/packages/spdk_24f0cb4c3f83c5e3773ceac60a95836862784b97.tar.gz 00:00:11.178 Response Code: HTTP/1.1 200 OK 00:00:11.178 Success: Status code 200 is in the accepted range: 200,404 00:00:11.179 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_24f0cb4c3f83c5e3773ceac60a95836862784b97.tar.gz 00:01:04.112 [Pipeline] } 00:01:04.129 [Pipeline] // retry 00:01:04.137 [Pipeline] sh 00:01:04.420 + tar --no-same-owner -xf spdk_24f0cb4c3f83c5e3773ceac60a95836862784b97.tar.gz 00:01:06.964 [Pipeline] sh 00:01:07.246 + git -C spdk log --oneline -n5 00:01:07.246 24f0cb4c3 test/common: Make sure get_zoned_devs() picks all namespaces 00:01:07.246 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:01:07.246 5592070b3 doc: update nvmf_tracing.md 00:01:07.246 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:01:07.246 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:01:07.256 [Pipeline] } 00:01:07.269 [Pipeline] // stage 00:01:07.277 [Pipeline] stage 00:01:07.279 [Pipeline] { (Prepare) 00:01:07.295 [Pipeline] writeFile 00:01:07.310 [Pipeline] sh 00:01:07.594 + logger -p user.info -t JENKINS-CI 00:01:07.605 [Pipeline] sh 00:01:07.884 + logger -p user.info -t JENKINS-CI 00:01:07.895 [Pipeline] sh 00:01:08.174 + cat autorun-spdk.conf 00:01:08.174 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.174 SPDK_TEST_NVMF=1 00:01:08.174 SPDK_TEST_NVME_CLI=1 00:01:08.174 SPDK_TEST_NVMF_NICS=mlx5 00:01:08.174 SPDK_RUN_UBSAN=1 00:01:08.174 NET_TYPE=phy 00:01:08.182 RUN_NIGHTLY=0 00:01:08.186 [Pipeline] readFile 00:01:08.206 [Pipeline] withEnv 00:01:08.208 [Pipeline] { 00:01:08.218 [Pipeline] sh 00:01:08.500 + set -ex 00:01:08.500 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:08.500 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:08.500 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.500 ++ SPDK_TEST_NVMF=1 00:01:08.500 ++ SPDK_TEST_NVME_CLI=1 00:01:08.500 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:08.500 ++ SPDK_RUN_UBSAN=1 00:01:08.500 ++ NET_TYPE=phy 00:01:08.500 ++ RUN_NIGHTLY=0 00:01:08.500 + case 
$SPDK_TEST_NVMF_NICS in 00:01:08.500 + DRIVERS=mlx5_ib 00:01:08.500 + [[ -n mlx5_ib ]] 00:01:08.500 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:08.500 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:15.103 rmmod: ERROR: Module irdma is not currently loaded 00:01:15.103 rmmod: ERROR: Module i40iw is not currently loaded 00:01:15.103 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:15.103 + true 00:01:15.103 + for D in $DRIVERS 00:01:15.103 + sudo modprobe mlx5_ib 00:01:15.103 + exit 0 00:01:15.113 [Pipeline] } 00:01:15.127 [Pipeline] // withEnv 00:01:15.131 [Pipeline] } 00:01:15.143 [Pipeline] // stage 00:01:15.152 [Pipeline] catchError 00:01:15.153 [Pipeline] { 00:01:15.166 [Pipeline] timeout 00:01:15.166 Timeout set to expire in 1 hr 0 min 00:01:15.168 [Pipeline] { 00:01:15.180 [Pipeline] stage 00:01:15.182 [Pipeline] { (Tests) 00:01:15.196 [Pipeline] sh 00:01:15.479 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:15.479 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:15.479 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:15.479 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:15.479 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:15.479 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:15.479 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:15.479 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:15.479 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:15.479 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:15.479 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:15.479 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:15.479 + source /etc/os-release 00:01:15.479 ++ NAME='Fedora Linux' 00:01:15.479 ++ VERSION='39 (Cloud Edition)' 00:01:15.479 ++ ID=fedora 00:01:15.479 ++ VERSION_ID=39 00:01:15.479 ++ VERSION_CODENAME= 00:01:15.479 ++ PLATFORM_ID=platform:f39 00:01:15.479 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:15.479 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:15.479 ++ LOGO=fedora-logo-icon 00:01:15.479 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:15.479 ++ HOME_URL=https://fedoraproject.org/ 00:01:15.479 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:15.479 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:15.479 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:15.479 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:15.479 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:15.479 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:15.479 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:15.479 ++ SUPPORT_END=2024-11-12 00:01:15.479 ++ VARIANT='Cloud Edition' 00:01:15.479 ++ VARIANT_ID=cloud 00:01:15.479 + uname -a 00:01:15.479 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:15.479 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:19.674 Hugepages 00:01:19.674 node hugesize free / total 00:01:19.674 node0 1048576kB 0 / 0 00:01:19.674 node0 2048kB 0 / 0 00:01:19.674 node1 1048576kB 0 / 0 00:01:19.674 node1 2048kB 0 / 0 00:01:19.674 00:01:19.674 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:19.674 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:19.674 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:19.674 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:19.674 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:19.674 I/OAT 
0000:00:04.4 8086 2021 0 ioatdma - - 00:01:19.674 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:19.674 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:19.674 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:19.674 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:19.674 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:19.674 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:19.674 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:19.674 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:19.674 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:19.674 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:19.674 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:19.674 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:19.674 + rm -f /tmp/spdk-ld-path 00:01:19.674 + source autorun-spdk.conf 00:01:19.674 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.674 ++ SPDK_TEST_NVMF=1 00:01:19.674 ++ SPDK_TEST_NVME_CLI=1 00:01:19.674 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:19.674 ++ SPDK_RUN_UBSAN=1 00:01:19.674 ++ NET_TYPE=phy 00:01:19.674 ++ RUN_NIGHTLY=0 00:01:19.674 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:19.674 + [[ -n '' ]] 00:01:19.674 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:19.674 + for M in /var/spdk/build-*-manifest.txt 00:01:19.674 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:19.674 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:19.674 + for M in /var/spdk/build-*-manifest.txt 00:01:19.674 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:19.674 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:19.674 + for M in /var/spdk/build-*-manifest.txt 00:01:19.674 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:19.674 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:19.674 ++ uname 00:01:19.674 + [[ Linux == \L\i\n\u\x ]] 00:01:19.674 + sudo dmesg -T 00:01:19.674 + sudo dmesg --clear 00:01:19.674 + dmesg_pid=3877277 00:01:19.674 + sudo dmesg -Tw 00:01:19.674 + [[ Fedora Linux == FreeBSD ]] 00:01:19.674 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.674 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.674 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:19.674 + [[ -x /usr/src/fio-static/fio ]] 00:01:19.674 + export FIO_BIN=/usr/src/fio-static/fio 00:01:19.674 + FIO_BIN=/usr/src/fio-static/fio 00:01:19.674 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:19.674 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:19.674 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:19.674 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.674 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.674 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:19.674 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.674 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.674 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:19.674 12:39:45 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:19.674 12:39:45 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:19.674 12:39:45 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.674 12:39:45 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:19.674 12:39:45 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:19.674 12:39:45 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:01:19.674 12:39:45 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:19.674 12:39:45 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy 00:01:19.674 12:39:45 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ RUN_NIGHTLY=0 00:01:19.675 12:39:45 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:19.675 12:39:45 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:19.675 12:39:45 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:19.675 12:39:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:19.675 12:39:45 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:19.675 12:39:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:19.675 12:39:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:19.675 12:39:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:19.675 12:39:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.675 12:39:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.675 12:39:45 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.675 12:39:45 -- paths/export.sh@5 -- $ export PATH 00:01:19.675 12:39:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.675 12:39:45 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:19.675 12:39:45 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:19.675 12:39:45 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732707585.XXXXXX 00:01:19.675 12:39:45 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732707585.XFcDXh 00:01:19.675 12:39:45 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:19.675 12:39:45 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:19.675 12:39:45 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:19.675 12:39:45 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:19.675 12:39:45 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:19.675 12:39:45 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:19.675 12:39:45 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:19.675 12:39:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.675 12:39:45 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:19.675 12:39:45 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:19.675 12:39:45 -- pm/common@17 -- $ local monitor 00:01:19.675 12:39:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.675 12:39:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.675 12:39:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.675 12:39:45 -- pm/common@21 -- $ date +%s 00:01:19.675 12:39:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.675 12:39:45 -- pm/common@21 -- $ date +%s 00:01:19.675 12:39:45 -- pm/common@25 -- $ sleep 1 00:01:19.675 12:39:45 -- pm/common@21 -- $ date +%s 00:01:19.675 12:39:45 -- pm/common@21 -- $ date +%s 00:01:19.675 12:39:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732707585 00:01:19.675 12:39:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732707585 00:01:19.675 12:39:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732707585 00:01:19.675 12:39:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732707585 00:01:19.675 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732707585_collect-cpu-load.pm.log 00:01:19.675 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732707585_collect-vmstat.pm.log 00:01:19.675 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732707585_collect-cpu-temp.pm.log 00:01:19.675 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732707585_collect-bmc-pm.bmc.pm.log 00:01:20.610 12:39:46 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:20.610 12:39:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:20.610 12:39:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:20.610 12:39:46 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:20.610 12:39:46 -- spdk/autobuild.sh@16 -- $ date -u 00:01:20.610 Wed Nov 27 11:39:46 AM UTC 2024 00:01:20.610 12:39:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:20.610 v25.01-pre-272-g24f0cb4c3 00:01:20.610 12:39:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:20.610 12:39:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:20.610 12:39:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:20.610 12:39:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:20.610 12:39:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:20.610 12:39:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.610 ************************************ 00:01:20.610 START TEST ubsan 00:01:20.610 ************************************ 00:01:20.610 12:39:46 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:20.610 using ubsan 00:01:20.610 00:01:20.610 real 0m0.001s 00:01:20.610 user 0m0.000s 00:01:20.610 sys 0m0.000s 00:01:20.610 12:39:46 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:20.610 12:39:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.610 ************************************ 00:01:20.610 END TEST ubsan 00:01:20.610 ************************************ 00:01:20.610 12:39:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:20.610 12:39:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:20.610 12:39:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:20.610 12:39:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:20.610 12:39:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:20.610 12:39:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:20.610 12:39:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:20.610 12:39:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:20.610 12:39:46 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:20.610 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:20.610 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:21.177 Using 'verbs' RDMA provider 00:01:33.950 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:48.834 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:48.834 Creating mk/config.mk...done. 00:01:48.834 Creating mk/cc.flags.mk...done. 00:01:48.834 Type 'make' to build. 00:01:48.834 12:40:14 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:01:48.834 12:40:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:48.834 12:40:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:48.834 12:40:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.834 ************************************ 00:01:48.834 START TEST make 00:01:48.834 ************************************ 00:01:48.834 12:40:14 make -- common/autotest_common.sh@1129 -- $ make -j112 00:01:48.834 make[1]: Nothing to be done for 'all'. 00:01:56.999 The Meson build system 00:01:56.999 Version: 1.5.0 00:01:56.999 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:56.999 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:56.999 Build type: native build 00:01:56.999 Program cat found: YES (/usr/bin/cat) 00:01:56.999 Project name: DPDK 00:01:56.999 Project version: 24.03.0 00:01:56.999 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:56.999 C linker for the host machine: cc ld.bfd 2.40-14 00:01:56.999 Host machine cpu family: x86_64 00:01:56.999 Host machine cpu: x86_64 00:01:56.999 Message: ## Building in Developer Mode ## 00:01:56.999 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:56.999 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:56.999 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:56.999 Program python3 found: YES (/usr/bin/python3) 00:01:56.999 Program cat found: YES (/usr/bin/cat) 00:01:56.999 Compiler for C supports arguments -march=native: YES 00:01:56.999 Checking for size of "void *" : 8 00:01:56.999 Checking for size of "void *" : 8 (cached) 00:01:56.999 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:56.999 Library m found: YES 00:01:56.999 Library numa found: YES 00:01:56.999 Has header "numaif.h" : YES 00:01:56.999 Library fdt found: NO 00:01:56.999 Library execinfo found: NO 00:01:56.999 Has header "execinfo.h" : YES 00:01:56.999 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:56.999 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:56.999 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:56.999 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:56.999 Run-time dependency openssl found: YES 3.1.1 00:01:56.999 Run-time dependency libpcap found: YES 1.10.4 00:01:56.999 Has header "pcap.h" with dependency libpcap: YES 00:01:56.999 Compiler for C supports arguments -Wcast-qual: YES 00:01:56.999 Compiler for C 
supports arguments -Wdeprecated: YES 00:01:56.999 Compiler for C supports arguments -Wformat: YES 00:01:56.999 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:56.999 Compiler for C supports arguments -Wformat-security: NO 00:01:56.999 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:56.999 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:56.999 Compiler for C supports arguments -Wnested-externs: YES 00:01:56.999 Compiler for C supports arguments -Wold-style-definition: YES 00:01:56.999 Compiler for C supports arguments -Wpointer-arith: YES 00:01:56.999 Compiler for C supports arguments -Wsign-compare: YES 00:01:56.999 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:56.999 Compiler for C supports arguments -Wundef: YES 00:01:56.999 Compiler for C supports arguments -Wwrite-strings: YES 00:01:56.999 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:56.999 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:56.999 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:56.999 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:56.999 Program objdump found: YES (/usr/bin/objdump) 00:01:56.999 Compiler for C supports arguments -mavx512f: YES 00:01:56.999 Checking if "AVX512 checking" compiles: YES 00:01:56.999 Fetching value of define "__SSE4_2__" : 1 00:01:56.999 Fetching value of define "__AES__" : 1 00:01:56.999 Fetching value of define "__AVX__" : 1 00:01:56.999 Fetching value of define "__AVX2__" : 1 00:01:56.999 Fetching value of define "__AVX512BW__" : 1 00:01:56.999 Fetching value of define "__AVX512CD__" : 1 00:01:56.999 Fetching value of define "__AVX512DQ__" : 1 00:01:56.999 Fetching value of define "__AVX512F__" : 1 00:01:56.999 Fetching value of define "__AVX512VL__" : 1 00:01:56.999 Fetching value of define "__PCLMUL__" : 1 00:01:56.999 Fetching value of define "__RDRND__" : 1 00:01:56.999 Fetching value of define "__RDSEED__" : 1 00:01:56.999 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:56.999 Fetching value of define "__znver1__" : (undefined) 00:01:56.999 Fetching value of define "__znver2__" : (undefined) 00:01:56.999 Fetching value of define "__znver3__" : (undefined) 00:01:56.999 Fetching value of define "__znver4__" : (undefined) 00:01:56.999 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:56.999 Message: lib/log: Defining dependency "log" 00:01:56.999 Message: lib/kvargs: Defining dependency "kvargs" 00:01:56.999 Message: lib/telemetry: Defining dependency "telemetry" 00:01:56.999 Checking for function "getentropy" : NO 00:01:56.999 Message: lib/eal: Defining dependency "eal" 00:01:56.999 Message: lib/ring: Defining dependency "ring" 00:01:56.999 Message: lib/rcu: Defining dependency "rcu" 00:01:56.999 Message: lib/mempool: Defining dependency "mempool" 00:01:56.999 Message: lib/mbuf: Defining dependency "mbuf" 00:01:57.000 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:57.000 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:57.000 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:57.000 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:57.000 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:57.000 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:57.000 Compiler for C supports arguments -mpclmul: YES 00:01:57.000 Compiler for C supports arguments -maes: YES 00:01:57.000 Compiler for C supports arguments -mavx512f: YES 
(cached) 00:01:57.000 Compiler for C supports arguments -mavx512bw: YES 00:01:57.000 Compiler for C supports arguments -mavx512dq: YES 00:01:57.000 Compiler for C supports arguments -mavx512vl: YES 00:01:57.000 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:57.000 Compiler for C supports arguments -mavx2: YES 00:01:57.000 Compiler for C supports arguments -mavx: YES 00:01:57.000 Message: lib/net: Defining dependency "net" 00:01:57.000 Message: lib/meter: Defining dependency "meter" 00:01:57.000 Message: lib/ethdev: Defining dependency "ethdev" 00:01:57.000 Message: lib/pci: Defining dependency "pci" 00:01:57.000 Message: lib/cmdline: Defining dependency "cmdline" 00:01:57.000 Message: lib/hash: Defining dependency "hash" 00:01:57.000 Message: lib/timer: Defining dependency "timer" 00:01:57.000 Message: lib/compressdev: Defining dependency "compressdev" 00:01:57.000 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:57.000 Message: lib/dmadev: Defining dependency "dmadev" 00:01:57.000 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:57.000 Message: lib/power: Defining dependency "power" 00:01:57.000 Message: lib/reorder: Defining dependency "reorder" 00:01:57.000 Message: lib/security: Defining dependency "security" 00:01:57.000 Has header "linux/userfaultfd.h" : YES 00:01:57.000 Has header "linux/vduse.h" : YES 00:01:57.000 Message: lib/vhost: Defining dependency "vhost" 00:01:57.000 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:57.000 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:57.000 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:57.000 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:57.000 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:57.000 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:57.000 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:57.000 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:57.000 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:57.000 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:57.000 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:57.000 Configuring doxy-api-html.conf using configuration 00:01:57.000 Configuring doxy-api-man.conf using configuration 00:01:57.000 Program mandb found: YES (/usr/bin/mandb) 00:01:57.000 Program sphinx-build found: NO 00:01:57.000 Configuring rte_build_config.h using configuration 00:01:57.000 Message: 00:01:57.000 ================= 00:01:57.000 Applications Enabled 00:01:57.000 ================= 00:01:57.000 00:01:57.000 apps: 00:01:57.000 00:01:57.000 00:01:57.000 Message: 00:01:57.000 ================= 00:01:57.000 Libraries Enabled 00:01:57.000 ================= 00:01:57.000 00:01:57.000 libs: 00:01:57.000 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:57.000 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:57.000 cryptodev, dmadev, power, reorder, security, vhost, 00:01:57.000 00:01:57.000 Message: 00:01:57.000 =============== 00:01:57.000 Drivers Enabled 00:01:57.000 =============== 00:01:57.000 00:01:57.000 common: 00:01:57.000 00:01:57.000 bus: 00:01:57.000 pci, vdev, 00:01:57.000 mempool: 00:01:57.000 ring, 00:01:57.000 dma: 00:01:57.000 00:01:57.000 net: 00:01:57.000 00:01:57.000 crypto: 00:01:57.000 00:01:57.000 compress: 00:01:57.000 00:01:57.000 vdpa: 00:01:57.000 
00:01:57.000 00:01:57.000 Message: 00:01:57.000 ================= 00:01:57.000 Content Skipped 00:01:57.000 ================= 00:01:57.000 00:01:57.000 apps: 00:01:57.000 dumpcap: explicitly disabled via build config 00:01:57.000 graph: explicitly disabled via build config 00:01:57.000 pdump: explicitly disabled via build config 00:01:57.000 proc-info: explicitly disabled via build config 00:01:57.000 test-acl: explicitly disabled via build config 00:01:57.000 test-bbdev: explicitly disabled via build config 00:01:57.000 test-cmdline: explicitly disabled via build config 00:01:57.000 test-compress-perf: explicitly disabled via build config 00:01:57.000 test-crypto-perf: explicitly disabled via build config 00:01:57.000 test-dma-perf: explicitly disabled via build config 00:01:57.000 test-eventdev: explicitly disabled via build config 00:01:57.000 test-fib: explicitly disabled via build config 00:01:57.000 test-flow-perf: explicitly disabled via build config 00:01:57.000 test-gpudev: explicitly disabled via build config 00:01:57.000 test-mldev: explicitly disabled via build config 00:01:57.000 test-pipeline: explicitly disabled via build config 00:01:57.000 test-pmd: explicitly disabled via build config 00:01:57.000 test-regex: explicitly disabled via build config 00:01:57.000 test-sad: explicitly disabled via build config 00:01:57.000 test-security-perf: explicitly disabled via build config 00:01:57.000 00:01:57.000 libs: 00:01:57.000 argparse: explicitly disabled via build config 00:01:57.000 metrics: explicitly disabled via build config 00:01:57.000 acl: explicitly disabled via build config 00:01:57.000 bbdev: explicitly disabled via build config 00:01:57.000 bitratestats: explicitly disabled via build config 00:01:57.000 bpf: explicitly disabled via build config 00:01:57.000 cfgfile: explicitly disabled via build config 00:01:57.000 distributor: explicitly disabled via build config 00:01:57.000 efd: explicitly disabled via build config 00:01:57.000 eventdev: explicitly disabled via build config 00:01:57.000 dispatcher: explicitly disabled via build config 00:01:57.000 gpudev: explicitly disabled via build config 00:01:57.000 gro: explicitly disabled via build config 00:01:57.000 gso: explicitly disabled via build config 00:01:57.000 ip_frag: explicitly disabled via build config 00:01:57.000 jobstats: explicitly disabled via build config 00:01:57.000 latencystats: explicitly disabled via build config 00:01:57.000 lpm: explicitly disabled via build config 00:01:57.000 member: explicitly disabled via build config 00:01:57.000 pcapng: explicitly disabled via build config 00:01:57.000 rawdev: explicitly disabled via build config 00:01:57.000 regexdev: explicitly disabled via build config 00:01:57.000 mldev: explicitly disabled via build config 00:01:57.000 rib: explicitly disabled via build config 00:01:57.000 sched: explicitly disabled via build config 00:01:57.000 stack: explicitly disabled via build config 00:01:57.000 ipsec: explicitly disabled via build config 00:01:57.000 pdcp: explicitly disabled via build config 00:01:57.000 fib: explicitly disabled via build config 00:01:57.000 port: explicitly disabled via build config 00:01:57.000 pdump: explicitly disabled via build config 00:01:57.000 table: explicitly disabled via build config 00:01:57.000 pipeline: explicitly disabled via build config 00:01:57.000 graph: explicitly disabled via build config 00:01:57.000 node: explicitly disabled via build config 00:01:57.000 00:01:57.000 drivers: 00:01:57.000 common/cpt: not in enabled drivers 
build config 00:01:57.000 common/dpaax: not in enabled drivers build config 00:01:57.000 common/iavf: not in enabled drivers build config 00:01:57.000 common/idpf: not in enabled drivers build config 00:01:57.000 common/ionic: not in enabled drivers build config 00:01:57.000 common/mvep: not in enabled drivers build config 00:01:57.000 common/octeontx: not in enabled drivers build config 00:01:57.000 bus/auxiliary: not in enabled drivers build config 00:01:57.000 bus/cdx: not in enabled drivers build config 00:01:57.000 bus/dpaa: not in enabled drivers build config 00:01:57.000 bus/fslmc: not in enabled drivers build config 00:01:57.000 bus/ifpga: not in enabled drivers build config 00:01:57.000 bus/platform: not in enabled drivers build config 00:01:57.000 bus/uacce: not in enabled drivers build config 00:01:57.000 bus/vmbus: not in enabled drivers build config 00:01:57.000 common/cnxk: not in enabled drivers build config 00:01:57.000 common/mlx5: not in enabled drivers build config 00:01:57.000 common/nfp: not in enabled drivers build config 00:01:57.000 common/nitrox: not in enabled drivers build config 00:01:57.000 common/qat: not in enabled drivers build config 00:01:57.000 common/sfc_efx: not in enabled drivers build config 00:01:57.000 mempool/bucket: not in enabled drivers build config 00:01:57.000 mempool/cnxk: not in enabled drivers build config 00:01:57.000 mempool/dpaa: not in enabled drivers build config 00:01:57.000 mempool/dpaa2: not in enabled drivers build config 00:01:57.000 mempool/octeontx: not in enabled drivers build config 00:01:57.000 mempool/stack: not in enabled drivers build config 00:01:57.000 dma/cnxk: not in enabled drivers build config 00:01:57.000 dma/dpaa: not in enabled drivers build config 00:01:57.000 dma/dpaa2: not in enabled drivers build config 00:01:57.000 dma/hisilicon: not in enabled drivers build config 00:01:57.000 dma/idxd: not in enabled drivers build config 00:01:57.000 dma/ioat: not in enabled drivers build config 00:01:57.000 dma/skeleton: not in enabled drivers build config 00:01:57.000 net/af_packet: not in enabled drivers build config 00:01:57.000 net/af_xdp: not in enabled drivers build config 00:01:57.000 net/ark: not in enabled drivers build config 00:01:57.000 net/atlantic: not in enabled drivers build config 00:01:57.000 net/avp: not in enabled drivers build config 00:01:57.000 net/axgbe: not in enabled drivers build config 00:01:57.000 net/bnx2x: not in enabled drivers build config 00:01:57.000 net/bnxt: not in enabled drivers build config 00:01:57.000 net/bonding: not in enabled drivers build config 00:01:57.000 net/cnxk: not in enabled drivers build config 00:01:57.000 net/cpfl: not in enabled drivers build config 00:01:57.000 net/cxgbe: not in enabled drivers build config 00:01:57.000 net/dpaa: not in enabled drivers build config 00:01:57.001 net/dpaa2: not in enabled drivers build config 00:01:57.001 net/e1000: not in enabled drivers build config 00:01:57.001 net/ena: not in enabled drivers build config 00:01:57.001 net/enetc: not in enabled drivers build config 00:01:57.001 net/enetfec: not in enabled drivers build config 00:01:57.001 net/enic: not in enabled drivers build config 00:01:57.001 net/failsafe: not in enabled drivers build config 00:01:57.001 net/fm10k: not in enabled drivers build config 00:01:57.001 net/gve: not in enabled drivers build config 00:01:57.001 net/hinic: not in enabled drivers build config 00:01:57.001 net/hns3: not in enabled drivers build config 00:01:57.001 net/i40e: not in enabled drivers build 
config 00:01:57.001 net/iavf: not in enabled drivers build config 00:01:57.001 net/ice: not in enabled drivers build config 00:01:57.001 net/idpf: not in enabled drivers build config 00:01:57.001 net/igc: not in enabled drivers build config 00:01:57.001 net/ionic: not in enabled drivers build config 00:01:57.001 net/ipn3ke: not in enabled drivers build config 00:01:57.001 net/ixgbe: not in enabled drivers build config 00:01:57.001 net/mana: not in enabled drivers build config 00:01:57.001 net/memif: not in enabled drivers build config 00:01:57.001 net/mlx4: not in enabled drivers build config 00:01:57.001 net/mlx5: not in enabled drivers build config 00:01:57.001 net/mvneta: not in enabled drivers build config 00:01:57.001 net/mvpp2: not in enabled drivers build config 00:01:57.001 net/netvsc: not in enabled drivers build config 00:01:57.001 net/nfb: not in enabled drivers build config 00:01:57.001 net/nfp: not in enabled drivers build config 00:01:57.001 net/ngbe: not in enabled drivers build config 00:01:57.001 net/null: not in enabled drivers build config 00:01:57.001 net/octeontx: not in enabled drivers build config 00:01:57.001 net/octeon_ep: not in enabled drivers build config 00:01:57.001 net/pcap: not in enabled drivers build config 00:01:57.001 net/pfe: not in enabled drivers build config 00:01:57.001 net/qede: not in enabled drivers build config 00:01:57.001 net/ring: not in enabled drivers build config 00:01:57.001 net/sfc: not in enabled drivers build config 00:01:57.001 net/softnic: not in enabled drivers build config 00:01:57.001 net/tap: not in enabled drivers build config 00:01:57.001 net/thunderx: not in enabled drivers build config 00:01:57.001 net/txgbe: not in enabled drivers build config 00:01:57.001 net/vdev_netvsc: not in enabled drivers build config 00:01:57.001 net/vhost: not in enabled drivers build config 00:01:57.001 net/virtio: not in enabled drivers build config 00:01:57.001 net/vmxnet3: not in enabled drivers build config 00:01:57.001 raw/*: missing internal dependency, "rawdev" 00:01:57.001 crypto/armv8: not in enabled drivers build config 00:01:57.001 crypto/bcmfs: not in enabled drivers build config 00:01:57.001 crypto/caam_jr: not in enabled drivers build config 00:01:57.001 crypto/ccp: not in enabled drivers build config 00:01:57.001 crypto/cnxk: not in enabled drivers build config 00:01:57.001 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.001 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.001 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.001 crypto/mlx5: not in enabled drivers build config 00:01:57.001 crypto/mvsam: not in enabled drivers build config 00:01:57.001 crypto/nitrox: not in enabled drivers build config 00:01:57.001 crypto/null: not in enabled drivers build config 00:01:57.001 crypto/octeontx: not in enabled drivers build config 00:01:57.001 crypto/openssl: not in enabled drivers build config 00:01:57.001 crypto/scheduler: not in enabled drivers build config 00:01:57.001 crypto/uadk: not in enabled drivers build config 00:01:57.001 crypto/virtio: not in enabled drivers build config 00:01:57.001 compress/isal: not in enabled drivers build config 00:01:57.001 compress/mlx5: not in enabled drivers build config 00:01:57.001 compress/nitrox: not in enabled drivers build config 00:01:57.001 compress/octeontx: not in enabled drivers build config 00:01:57.001 compress/zlib: not in enabled drivers build config 00:01:57.001 regex/*: missing internal dependency, "regexdev" 00:01:57.001 ml/*: missing 
internal dependency, "mldev" 00:01:57.001 vdpa/ifc: not in enabled drivers build config 00:01:57.001 vdpa/mlx5: not in enabled drivers build config 00:01:57.001 vdpa/nfp: not in enabled drivers build config 00:01:57.001 vdpa/sfc: not in enabled drivers build config 00:01:57.001 event/*: missing internal dependency, "eventdev" 00:01:57.001 baseband/*: missing internal dependency, "bbdev" 00:01:57.001 gpu/*: missing internal dependency, "gpudev" 00:01:57.001 00:01:57.001 00:01:57.001 Build targets in project: 85 00:01:57.001 00:01:57.001 DPDK 24.03.0 00:01:57.001 00:01:57.001 User defined options 00:01:57.001 buildtype : debug 00:01:57.001 default_library : shared 00:01:57.001 libdir : lib 00:01:57.001 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:57.001 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:57.001 c_link_args : 00:01:57.001 cpu_instruction_set: native 00:01:57.001 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:57.001 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:57.001 enable_docs : false 00:01:57.001 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:57.001 enable_kmods : false 00:01:57.001 max_lcores : 128 00:01:57.001 tests : false 00:01:57.001 00:01:57.001 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.277 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:57.277 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:57.543 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:57.543 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:57.543 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:57.543 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:57.543 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:57.543 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:57.543 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:57.543 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:57.543 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:57.543 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:57.543 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:57.543 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:57.543 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:57.543 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:57.543 [16/268] Linking static target lib/librte_kvargs.a 00:01:57.543 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:57.543 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:57.543 [19/268] Linking 
static target lib/librte_log.a 00:01:57.543 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:57.543 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:57.543 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:57.543 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:57.543 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:57.801 [25/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:57.801 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:57.801 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:57.801 [28/268] Linking static target lib/librte_pci.a 00:01:57.801 [29/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:57.801 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:57.801 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:57.801 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:57.801 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:57.801 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.801 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:58.062 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:58.062 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:58.062 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.062 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.062 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:58.062 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:58.062 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:58.062 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:58.062 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.062 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:58.062 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:58.062 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.062 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:58.062 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.062 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:58.062 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:58.062 [52/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:58.062 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.062 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.062 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:58.062 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.062 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:58.062 [58/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.062 [59/268] Compiling C object 
lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.062 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:58.062 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:58.062 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.062 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:58.062 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:58.062 [65/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.062 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:58.062 [67/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.062 [68/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:58.062 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.062 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:58.062 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.062 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.062 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.062 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:58.062 [75/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.062 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.062 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.062 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.062 [79/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.062 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:58.062 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.062 [82/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:58.062 [83/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.062 [84/268] Linking static target lib/librte_ring.a 00:01:58.062 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.062 [86/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.062 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.062 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:58.062 [89/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:58.062 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.062 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:58.062 [92/268] Linking static target lib/librte_meter.a 00:01:58.062 [93/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.062 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.062 [95/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.062 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.062 [97/268] Linking static target lib/librte_telemetry.a 00:01:58.062 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.062 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.062 
[100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.062 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.062 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.062 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:58.062 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:58.062 [105/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.062 [106/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.062 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.062 [108/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:58.062 [109/268] Linking static target lib/librte_cmdline.a 00:01:58.062 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.062 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.062 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.062 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:58.062 [114/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.062 [115/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:58.062 [116/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.062 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:58.062 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:58.062 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:58.062 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:58.062 [121/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:58.062 [122/268] Linking static target lib/librte_rcu.a 00:01:58.062 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:58.062 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:58.062 [125/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.321 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:58.321 [127/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.321 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:58.321 [129/268] Linking static target lib/librte_mempool.a 00:01:58.321 [130/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:58.321 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:58.321 [132/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.321 [133/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.321 [134/268] Linking static target lib/librte_net.a 00:01:58.321 [135/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.321 [136/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:58.321 [137/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.321 [138/268] Linking static target lib/librte_timer.a 00:01:58.321 [139/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:58.321 [140/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:58.321 [141/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.321 [142/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.321 [143/268] Linking static target lib/librte_eal.a 00:01:58.321 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:58.321 [145/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.321 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:58.321 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:58.321 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:58.321 [149/268] Linking static target lib/librte_compressdev.a 00:01:58.321 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.321 [151/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.321 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.321 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:58.321 [154/268] Linking static target lib/librte_dmadev.a 00:01:58.321 [155/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.321 [156/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.321 [157/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.321 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:58.321 [159/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.321 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.321 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:58.321 [162/268] Linking static target lib/librte_mbuf.a 00:01:58.580 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:58.580 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:58.580 [165/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.580 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:58.580 [167/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.580 [168/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:58.580 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.580 [170/268] Linking static target lib/librte_reorder.a 00:01:58.580 [171/268] Linking target lib/librte_log.so.24.1 00:01:58.580 [172/268] Linking static target lib/librte_hash.a 00:01:58.580 [173/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:58.580 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.580 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:58.580 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.580 [177/268] Linking static target lib/librte_power.a 00:01:58.580 [178/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:58.580 [179/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.580 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:58.580 [181/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:58.580 [182/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.580 [183/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:58.580 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:58.580 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.580 [186/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:58.580 [187/268] Linking static target lib/librte_cryptodev.a 00:01:58.580 [188/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.580 [189/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:58.580 [190/268] Linking target lib/librte_kvargs.so.24.1 00:01:58.580 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:58.580 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:58.580 [193/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:58.580 [194/268] Linking target lib/librte_telemetry.so.24.1 00:01:58.580 [195/268] Linking static target lib/librte_security.a 00:01:58.580 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:58.580 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:58.839 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.839 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.839 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:58.839 [201/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:58.839 [202/268] Linking static target drivers/librte_bus_vdev.a 00:01:58.839 [203/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.839 [204/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:58.839 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:58.839 [206/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:58.839 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.839 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.839 [209/268] Linking static target drivers/librte_bus_pci.a 00:01:58.839 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:58.839 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.839 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.839 [213/268] Linking static target drivers/librte_mempool_ring.a 00:01:58.839 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.099 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.099 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:59.099 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.099 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.099 
[219/268] Linking static target lib/librte_ethdev.a 00:01:59.099 [220/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.358 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.358 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:59.358 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.358 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.358 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.616 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.616 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.184 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:00.184 [229/268] Linking static target lib/librte_vhost.a 00:02:00.751 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.655 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.223 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.595 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.853 [234/268] Linking target lib/librte_eal.so.24.1 00:02:10.853 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:10.853 [236/268] Linking target lib/librte_timer.so.24.1 00:02:10.853 [237/268] Linking target lib/librte_ring.so.24.1 00:02:10.853 [238/268] Linking target lib/librte_pci.so.24.1 00:02:10.853 [239/268] Linking target lib/librte_meter.so.24.1 00:02:10.853 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:10.853 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:11.110 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:11.110 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:11.110 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:11.110 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:11.110 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:11.110 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:11.110 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:11.110 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:11.367 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:11.367 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:11.367 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:11.367 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:11.367 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:11.624 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:11.624 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:11.624 [257/268] Linking target lib/librte_net.so.24.1 00:02:11.624 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:11.624 [259/268] Generating symbol file 
lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:11.624 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:11.624 [261/268] Linking target lib/librte_hash.so.24.1 00:02:11.624 [262/268] Linking target lib/librte_security.so.24.1 00:02:11.624 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:11.624 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:11.882 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:11.882 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:11.882 [267/268] Linking target lib/librte_power.so.24.1 00:02:11.882 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:11.882 INFO: autodetecting backend as ninja 00:02:11.882 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:18.433 CC lib/ut/ut.o 00:02:18.433 CC lib/log/log.o 00:02:18.433 CC lib/ut_mock/mock.o 00:02:18.433 CC lib/log/log_flags.o 00:02:18.433 CC lib/log/log_deprecated.o 00:02:18.433 LIB libspdk_ut.a 00:02:18.433 SO libspdk_ut.so.2.0 00:02:18.433 LIB libspdk_log.a 00:02:18.433 LIB libspdk_ut_mock.a 00:02:18.433 SYMLINK libspdk_ut.so 00:02:18.433 SO libspdk_log.so.7.1 00:02:18.433 SO libspdk_ut_mock.so.6.0 00:02:18.693 SYMLINK libspdk_ut_mock.so 00:02:18.693 SYMLINK libspdk_log.so 00:02:18.951 CC lib/util/base64.o 00:02:18.951 CC lib/util/bit_array.o 00:02:18.951 CC lib/util/crc16.o 00:02:18.951 CC lib/util/cpuset.o 00:02:18.951 CC lib/util/crc32.o 00:02:18.951 CC lib/util/crc32c.o 00:02:18.951 CC lib/util/crc32_ieee.o 00:02:18.951 CC lib/util/crc64.o 00:02:18.951 CC lib/util/fd_group.o 00:02:18.951 CC lib/util/dif.o 00:02:18.951 CC lib/util/fd.o 00:02:18.951 CC lib/util/iov.o 00:02:18.951 CC lib/util/file.o 00:02:18.951 CC lib/util/hexlify.o 00:02:18.951 CC lib/util/math.o 00:02:18.951 CC lib/util/net.o 00:02:18.951 CC lib/util/pipe.o 00:02:18.951 CC lib/util/strerror_tls.o 00:02:18.951 CC lib/util/string.o 00:02:18.951 CC lib/util/uuid.o 00:02:18.951 CC lib/util/xor.o 00:02:18.951 CC lib/util/zipf.o 00:02:18.951 CC lib/util/md5.o 00:02:18.951 CXX lib/trace_parser/trace.o 00:02:18.951 CC lib/dma/dma.o 00:02:18.951 CC lib/ioat/ioat.o 00:02:19.210 CC lib/vfio_user/host/vfio_user_pci.o 00:02:19.210 CC lib/vfio_user/host/vfio_user.o 00:02:19.210 LIB libspdk_dma.a 00:02:19.210 SO libspdk_dma.so.5.0 00:02:19.210 LIB libspdk_ioat.a 00:02:19.210 SYMLINK libspdk_dma.so 00:02:19.210 SO libspdk_ioat.so.7.0 00:02:19.210 LIB libspdk_vfio_user.a 00:02:19.210 SYMLINK libspdk_ioat.so 00:02:19.468 SO libspdk_vfio_user.so.5.0 00:02:19.468 LIB libspdk_util.a 00:02:19.468 SYMLINK libspdk_vfio_user.so 00:02:19.468 SO libspdk_util.so.10.1 00:02:19.468 SYMLINK libspdk_util.so 00:02:19.727 LIB libspdk_trace_parser.a 00:02:19.727 SO libspdk_trace_parser.so.6.0 00:02:19.727 SYMLINK libspdk_trace_parser.so 00:02:19.985 CC lib/env_dpdk/env.o 00:02:19.985 CC lib/env_dpdk/memory.o 00:02:19.985 CC lib/env_dpdk/init.o 00:02:19.985 CC lib/env_dpdk/pci.o 00:02:19.985 CC lib/env_dpdk/threads.o 00:02:19.985 CC lib/env_dpdk/pci_ioat.o 00:02:19.985 CC lib/env_dpdk/sigbus_handler.o 00:02:19.985 CC lib/env_dpdk/pci_virtio.o 00:02:19.985 CC lib/env_dpdk/pci_vmd.o 00:02:19.985 CC lib/env_dpdk/pci_idxd.o 00:02:19.985 CC lib/env_dpdk/pci_event.o 00:02:19.985 CC lib/env_dpdk/pci_dpdk.o 00:02:19.985 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:19.985 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:19.985 CC lib/vmd/led.o 
00:02:19.985 CC lib/vmd/vmd.o 00:02:19.985 CC lib/rdma_utils/rdma_utils.o 00:02:19.985 CC lib/json/json_write.o 00:02:19.985 CC lib/json/json_parse.o 00:02:19.985 CC lib/json/json_util.o 00:02:19.985 CC lib/conf/conf.o 00:02:19.985 CC lib/idxd/idxd.o 00:02:19.985 CC lib/idxd/idxd_user.o 00:02:19.985 CC lib/idxd/idxd_kernel.o 00:02:20.244 LIB libspdk_conf.a 00:02:20.244 SO libspdk_conf.so.6.0 00:02:20.244 LIB libspdk_rdma_utils.a 00:02:20.244 LIB libspdk_json.a 00:02:20.244 SO libspdk_rdma_utils.so.1.0 00:02:20.244 SO libspdk_json.so.6.0 00:02:20.244 SYMLINK libspdk_conf.so 00:02:20.244 SYMLINK libspdk_rdma_utils.so 00:02:20.244 SYMLINK libspdk_json.so 00:02:20.503 LIB libspdk_idxd.a 00:02:20.503 LIB libspdk_vmd.a 00:02:20.503 SO libspdk_idxd.so.12.1 00:02:20.503 SO libspdk_vmd.so.6.0 00:02:20.503 SYMLINK libspdk_idxd.so 00:02:20.503 SYMLINK libspdk_vmd.so 00:02:20.762 CC lib/rdma_provider/common.o 00:02:20.762 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:20.762 CC lib/jsonrpc/jsonrpc_server.o 00:02:20.762 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:20.762 CC lib/jsonrpc/jsonrpc_client.o 00:02:20.762 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:20.762 LIB libspdk_rdma_provider.a 00:02:20.762 SO libspdk_rdma_provider.so.7.0 00:02:20.762 LIB libspdk_jsonrpc.a 00:02:21.021 LIB libspdk_env_dpdk.a 00:02:21.021 SO libspdk_jsonrpc.so.6.0 00:02:21.021 SYMLINK libspdk_rdma_provider.so 00:02:21.021 SO libspdk_env_dpdk.so.15.1 00:02:21.021 SYMLINK libspdk_jsonrpc.so 00:02:21.021 SYMLINK libspdk_env_dpdk.so 00:02:21.280 CC lib/rpc/rpc.o 00:02:21.540 LIB libspdk_rpc.a 00:02:21.540 SO libspdk_rpc.so.6.0 00:02:21.540 SYMLINK libspdk_rpc.so 00:02:22.107 CC lib/trace/trace.o 00:02:22.107 CC lib/trace/trace_flags.o 00:02:22.107 CC lib/trace/trace_rpc.o 00:02:22.107 CC lib/notify/notify.o 00:02:22.107 CC lib/notify/notify_rpc.o 00:02:22.107 CC lib/keyring/keyring.o 00:02:22.107 CC lib/keyring/keyring_rpc.o 00:02:22.107 LIB libspdk_notify.a 00:02:22.107 SO libspdk_notify.so.6.0 00:02:22.107 LIB libspdk_trace.a 00:02:22.107 LIB libspdk_keyring.a 00:02:22.107 SO libspdk_trace.so.11.0 00:02:22.107 SYMLINK libspdk_notify.so 00:02:22.107 SO libspdk_keyring.so.2.0 00:02:22.365 SYMLINK libspdk_trace.so 00:02:22.365 SYMLINK libspdk_keyring.so 00:02:22.624 CC lib/thread/iobuf.o 00:02:22.624 CC lib/thread/thread.o 00:02:22.624 CC lib/sock/sock.o 00:02:22.624 CC lib/sock/sock_rpc.o 00:02:22.882 LIB libspdk_sock.a 00:02:22.882 SO libspdk_sock.so.10.0 00:02:23.141 SYMLINK libspdk_sock.so 00:02:23.400 CC lib/nvme/nvme_fabric.o 00:02:23.400 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:23.400 CC lib/nvme/nvme_ctrlr.o 00:02:23.400 CC lib/nvme/nvme_ns.o 00:02:23.400 CC lib/nvme/nvme_ns_cmd.o 00:02:23.400 CC lib/nvme/nvme_pcie.o 00:02:23.400 CC lib/nvme/nvme.o 00:02:23.400 CC lib/nvme/nvme_pcie_common.o 00:02:23.400 CC lib/nvme/nvme_transport.o 00:02:23.400 CC lib/nvme/nvme_qpair.o 00:02:23.400 CC lib/nvme/nvme_quirks.o 00:02:23.400 CC lib/nvme/nvme_discovery.o 00:02:23.400 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:23.400 CC lib/nvme/nvme_opal.o 00:02:23.400 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:23.400 CC lib/nvme/nvme_tcp.o 00:02:23.400 CC lib/nvme/nvme_io_msg.o 00:02:23.400 CC lib/nvme/nvme_poll_group.o 00:02:23.400 CC lib/nvme/nvme_zns.o 00:02:23.400 CC lib/nvme/nvme_stubs.o 00:02:23.400 CC lib/nvme/nvme_auth.o 00:02:23.400 CC lib/nvme/nvme_cuse.o 00:02:23.400 CC lib/nvme/nvme_rdma.o 00:02:23.659 LIB libspdk_thread.a 00:02:23.659 SO libspdk_thread.so.11.0 00:02:23.659 SYMLINK libspdk_thread.so 00:02:24.227 CC lib/init/json_config.o 
00:02:24.227 CC lib/virtio/virtio_vfio_user.o 00:02:24.227 CC lib/virtio/virtio.o 00:02:24.227 CC lib/virtio/virtio_pci.o 00:02:24.227 CC lib/init/subsystem.o 00:02:24.227 CC lib/virtio/virtio_vhost_user.o 00:02:24.227 CC lib/init/subsystem_rpc.o 00:02:24.227 CC lib/accel/accel_rpc.o 00:02:24.227 CC lib/accel/accel.o 00:02:24.227 CC lib/init/rpc.o 00:02:24.227 CC lib/accel/accel_sw.o 00:02:24.227 CC lib/fsdev/fsdev.o 00:02:24.227 CC lib/fsdev/fsdev_io.o 00:02:24.227 CC lib/fsdev/fsdev_rpc.o 00:02:24.227 CC lib/blob/blobstore.o 00:02:24.227 CC lib/blob/request.o 00:02:24.227 CC lib/blob/zeroes.o 00:02:24.227 CC lib/blob/blob_bs_dev.o 00:02:24.227 LIB libspdk_init.a 00:02:24.227 SO libspdk_init.so.6.0 00:02:24.486 LIB libspdk_virtio.a 00:02:24.486 SYMLINK libspdk_init.so 00:02:24.486 SO libspdk_virtio.so.7.0 00:02:24.486 SYMLINK libspdk_virtio.so 00:02:24.486 LIB libspdk_fsdev.a 00:02:24.745 SO libspdk_fsdev.so.2.0 00:02:24.745 SYMLINK libspdk_fsdev.so 00:02:24.745 CC lib/event/app.o 00:02:24.745 CC lib/event/reactor.o 00:02:24.745 CC lib/event/log_rpc.o 00:02:24.745 CC lib/event/app_rpc.o 00:02:24.745 CC lib/event/scheduler_static.o 00:02:25.004 LIB libspdk_accel.a 00:02:25.004 SO libspdk_accel.so.16.0 00:02:25.004 LIB libspdk_nvme.a 00:02:25.004 SYMLINK libspdk_accel.so 00:02:25.004 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:25.004 LIB libspdk_event.a 00:02:25.004 SO libspdk_nvme.so.15.0 00:02:25.262 SO libspdk_event.so.14.0 00:02:25.263 SYMLINK libspdk_event.so 00:02:25.263 SYMLINK libspdk_nvme.so 00:02:25.263 CC lib/bdev/bdev_zone.o 00:02:25.263 CC lib/bdev/bdev.o 00:02:25.263 CC lib/bdev/bdev_rpc.o 00:02:25.263 CC lib/bdev/scsi_nvme.o 00:02:25.263 CC lib/bdev/part.o 00:02:25.520 LIB libspdk_fuse_dispatcher.a 00:02:25.520 SO libspdk_fuse_dispatcher.so.1.0 00:02:25.520 SYMLINK libspdk_fuse_dispatcher.so 00:02:26.452 LIB libspdk_blob.a 00:02:26.452 SO libspdk_blob.so.12.0 00:02:26.452 SYMLINK libspdk_blob.so 00:02:26.710 CC lib/lvol/lvol.o 00:02:26.710 CC lib/blobfs/blobfs.o 00:02:26.710 CC lib/blobfs/tree.o 00:02:27.275 LIB libspdk_bdev.a 00:02:27.275 SO libspdk_bdev.so.17.0 00:02:27.275 LIB libspdk_blobfs.a 00:02:27.275 SYMLINK libspdk_bdev.so 00:02:27.275 SO libspdk_blobfs.so.11.0 00:02:27.275 LIB libspdk_lvol.a 00:02:27.534 SO libspdk_lvol.so.11.0 00:02:27.534 SYMLINK libspdk_blobfs.so 00:02:27.534 SYMLINK libspdk_lvol.so 00:02:27.792 CC lib/nbd/nbd.o 00:02:27.792 CC lib/nbd/nbd_rpc.o 00:02:27.792 CC lib/ftl/ftl_core.o 00:02:27.792 CC lib/ftl/ftl_init.o 00:02:27.792 CC lib/ftl/ftl_io.o 00:02:27.792 CC lib/ftl/ftl_layout.o 00:02:27.792 CC lib/ftl/ftl_sb.o 00:02:27.792 CC lib/ftl/ftl_debug.o 00:02:27.792 CC lib/ftl/ftl_l2p.o 00:02:27.792 CC lib/ftl/ftl_l2p_flat.o 00:02:27.792 CC lib/ftl/ftl_nv_cache.o 00:02:27.792 CC lib/ftl/ftl_band.o 00:02:27.792 CC lib/ftl/ftl_band_ops.o 00:02:27.792 CC lib/ftl/ftl_writer.o 00:02:27.792 CC lib/ftl/ftl_l2p_cache.o 00:02:27.792 CC lib/ftl/ftl_p2l.o 00:02:27.792 CC lib/ftl/ftl_rq.o 00:02:27.792 CC lib/ftl/ftl_reloc.o 00:02:27.792 CC lib/ublk/ublk.o 00:02:27.792 CC lib/ublk/ublk_rpc.o 00:02:27.792 CC lib/ftl/ftl_p2l_log.o 00:02:27.792 CC lib/ftl/mngt/ftl_mngt.o 00:02:27.792 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:27.792 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:27.792 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:27.792 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:27.792 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:27.792 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:27.792 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:27.792 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:27.792 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:02:27.792 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:27.792 CC lib/ftl/utils/ftl_conf.o 00:02:27.792 CC lib/nvmf/ctrlr.o 00:02:27.792 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:27.792 CC lib/nvmf/ctrlr_discovery.o 00:02:27.792 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:27.792 CC lib/nvmf/ctrlr_bdev.o 00:02:27.792 CC lib/nvmf/nvmf.o 00:02:27.792 CC lib/ftl/utils/ftl_md.o 00:02:27.792 CC lib/nvmf/subsystem.o 00:02:27.792 CC lib/ftl/utils/ftl_mempool.o 00:02:27.792 CC lib/ftl/utils/ftl_bitmap.o 00:02:27.792 CC lib/nvmf/nvmf_rpc.o 00:02:27.792 CC lib/nvmf/tcp.o 00:02:27.792 CC lib/nvmf/transport.o 00:02:27.792 CC lib/ftl/utils/ftl_property.o 00:02:27.792 CC lib/nvmf/stubs.o 00:02:27.792 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:27.792 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:27.792 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:27.792 CC lib/nvmf/mdns_server.o 00:02:27.792 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:27.792 CC lib/nvmf/rdma.o 00:02:27.792 CC lib/nvmf/auth.o 00:02:27.792 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:27.792 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:27.792 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:27.792 CC lib/scsi/dev.o 00:02:27.792 CC lib/scsi/lun.o 00:02:27.792 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:27.792 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:27.792 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:27.792 CC lib/scsi/port.o 00:02:27.792 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:27.792 CC lib/scsi/scsi.o 00:02:27.792 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:27.792 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:27.792 CC lib/scsi/scsi_bdev.o 00:02:27.792 CC lib/ftl/base/ftl_base_dev.o 00:02:27.792 CC lib/scsi/scsi_rpc.o 00:02:27.792 CC lib/ftl/base/ftl_base_bdev.o 00:02:27.792 CC lib/scsi/scsi_pr.o 00:02:27.792 CC lib/ftl/ftl_trace.o 00:02:27.792 CC lib/scsi/task.o 00:02:28.358 LIB libspdk_nbd.a 00:02:28.358 SO libspdk_nbd.so.7.0 00:02:28.358 LIB libspdk_ublk.a 00:02:28.358 LIB libspdk_scsi.a 00:02:28.358 SO libspdk_ublk.so.3.0 00:02:28.358 SYMLINK libspdk_nbd.so 00:02:28.358 SO libspdk_scsi.so.9.0 00:02:28.617 SYMLINK libspdk_ublk.so 00:02:28.617 SYMLINK libspdk_scsi.so 00:02:28.617 LIB libspdk_ftl.a 00:02:28.876 SO libspdk_ftl.so.9.0 00:02:28.876 CC lib/iscsi/conn.o 00:02:28.876 CC lib/iscsi/init_grp.o 00:02:28.876 CC lib/iscsi/iscsi.o 00:02:28.876 CC lib/vhost/vhost.o 00:02:28.876 CC lib/iscsi/portal_grp.o 00:02:28.876 CC lib/iscsi/param.o 00:02:28.876 CC lib/vhost/vhost_rpc.o 00:02:28.876 CC lib/vhost/vhost_scsi.o 00:02:28.876 CC lib/iscsi/tgt_node.o 00:02:28.876 CC lib/vhost/vhost_blk.o 00:02:28.876 CC lib/iscsi/iscsi_subsystem.o 00:02:28.876 CC lib/iscsi/iscsi_rpc.o 00:02:28.876 CC lib/vhost/rte_vhost_user.o 00:02:28.876 CC lib/iscsi/task.o 00:02:29.134 SYMLINK libspdk_ftl.so 00:02:29.394 LIB libspdk_nvmf.a 00:02:29.652 SO libspdk_nvmf.so.20.0 00:02:29.652 LIB libspdk_vhost.a 00:02:29.652 SO libspdk_vhost.so.8.0 00:02:29.652 SYMLINK libspdk_nvmf.so 00:02:29.652 SYMLINK libspdk_vhost.so 00:02:29.910 LIB libspdk_iscsi.a 00:02:29.910 SO libspdk_iscsi.so.8.0 00:02:29.910 SYMLINK libspdk_iscsi.so 00:02:30.477 CC module/env_dpdk/env_dpdk_rpc.o 00:02:30.735 LIB libspdk_env_dpdk_rpc.a 00:02:30.735 CC module/scheduler/gscheduler/gscheduler.o 00:02:30.735 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:30.735 CC module/fsdev/aio/fsdev_aio.o 00:02:30.735 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:30.735 CC module/fsdev/aio/linux_aio_mgr.o 00:02:30.735 CC module/keyring/file/keyring.o 00:02:30.735 CC module/keyring/linux/keyring.o 00:02:30.735 
CC module/keyring/linux/keyring_rpc.o 00:02:30.735 CC module/keyring/file/keyring_rpc.o 00:02:30.735 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:30.735 CC module/accel/error/accel_error.o 00:02:30.735 CC module/sock/posix/posix.o 00:02:30.735 CC module/accel/dsa/accel_dsa_rpc.o 00:02:30.735 CC module/accel/error/accel_error_rpc.o 00:02:30.735 CC module/accel/dsa/accel_dsa.o 00:02:30.735 CC module/blob/bdev/blob_bdev.o 00:02:30.735 CC module/accel/ioat/accel_ioat.o 00:02:30.735 CC module/accel/ioat/accel_ioat_rpc.o 00:02:30.735 SO libspdk_env_dpdk_rpc.so.6.0 00:02:30.735 CC module/accel/iaa/accel_iaa.o 00:02:30.735 CC module/accel/iaa/accel_iaa_rpc.o 00:02:30.735 SYMLINK libspdk_env_dpdk_rpc.so 00:02:30.735 LIB libspdk_scheduler_gscheduler.a 00:02:30.995 LIB libspdk_keyring_linux.a 00:02:30.995 LIB libspdk_scheduler_dpdk_governor.a 00:02:30.995 LIB libspdk_keyring_file.a 00:02:30.995 SO libspdk_keyring_linux.so.1.0 00:02:30.995 SO libspdk_scheduler_gscheduler.so.4.0 00:02:30.995 LIB libspdk_scheduler_dynamic.a 00:02:30.995 SO libspdk_keyring_file.so.2.0 00:02:30.995 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:30.995 LIB libspdk_accel_ioat.a 00:02:30.995 LIB libspdk_accel_error.a 00:02:30.995 SO libspdk_scheduler_dynamic.so.4.0 00:02:30.995 SYMLINK libspdk_keyring_linux.so 00:02:30.995 SYMLINK libspdk_scheduler_gscheduler.so 00:02:30.995 LIB libspdk_accel_iaa.a 00:02:30.995 SO libspdk_accel_ioat.so.6.0 00:02:30.995 SO libspdk_accel_error.so.2.0 00:02:30.995 SYMLINK libspdk_keyring_file.so 00:02:30.995 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:30.995 LIB libspdk_blob_bdev.a 00:02:30.995 SO libspdk_accel_iaa.so.3.0 00:02:30.995 LIB libspdk_accel_dsa.a 00:02:30.995 SYMLINK libspdk_scheduler_dynamic.so 00:02:30.995 SYMLINK libspdk_accel_ioat.so 00:02:30.995 SO libspdk_accel_dsa.so.5.0 00:02:30.995 SO libspdk_blob_bdev.so.12.0 00:02:30.995 SYMLINK libspdk_accel_error.so 00:02:30.995 SYMLINK libspdk_accel_iaa.so 00:02:30.995 SYMLINK libspdk_blob_bdev.so 00:02:30.995 SYMLINK libspdk_accel_dsa.so 00:02:31.254 LIB libspdk_fsdev_aio.a 00:02:31.254 SO libspdk_fsdev_aio.so.1.0 00:02:31.254 LIB libspdk_sock_posix.a 00:02:31.254 SO libspdk_sock_posix.so.6.0 00:02:31.513 SYMLINK libspdk_fsdev_aio.so 00:02:31.513 SYMLINK libspdk_sock_posix.so 00:02:31.513 CC module/bdev/malloc/bdev_malloc.o 00:02:31.513 CC module/bdev/error/vbdev_error_rpc.o 00:02:31.513 CC module/bdev/error/vbdev_error.o 00:02:31.513 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:31.513 CC module/bdev/delay/vbdev_delay.o 00:02:31.513 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:31.513 CC module/bdev/passthru/vbdev_passthru.o 00:02:31.513 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:31.513 CC module/blobfs/bdev/blobfs_bdev.o 00:02:31.513 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:31.513 CC module/bdev/gpt/gpt.o 00:02:31.513 CC module/bdev/gpt/vbdev_gpt.o 00:02:31.514 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:31.514 CC module/bdev/lvol/vbdev_lvol.o 00:02:31.514 CC module/bdev/nvme/bdev_nvme.o 00:02:31.514 CC module/bdev/aio/bdev_aio.o 00:02:31.514 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:31.514 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:31.514 CC module/bdev/iscsi/bdev_iscsi.o 00:02:31.514 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:31.514 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:31.514 CC module/bdev/aio/bdev_aio_rpc.o 00:02:31.514 CC module/bdev/nvme/nvme_rpc.o 00:02:31.514 CC module/bdev/nvme/bdev_mdns_client.o 00:02:31.514 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:31.514 CC 
module/bdev/nvme/vbdev_opal.o 00:02:31.514 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:31.514 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:31.514 CC module/bdev/split/vbdev_split.o 00:02:31.514 CC module/bdev/split/vbdev_split_rpc.o 00:02:31.514 CC module/bdev/null/bdev_null.o 00:02:31.514 CC module/bdev/null/bdev_null_rpc.o 00:02:31.514 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:31.514 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:31.514 CC module/bdev/raid/bdev_raid.o 00:02:31.514 CC module/bdev/raid/bdev_raid_rpc.o 00:02:31.514 CC module/bdev/raid/raid1.o 00:02:31.514 CC module/bdev/raid/raid0.o 00:02:31.514 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:31.514 CC module/bdev/raid/bdev_raid_sb.o 00:02:31.514 CC module/bdev/ftl/bdev_ftl.o 00:02:31.514 CC module/bdev/raid/concat.o 00:02:31.772 LIB libspdk_blobfs_bdev.a 00:02:31.772 SO libspdk_blobfs_bdev.so.6.0 00:02:31.772 LIB libspdk_bdev_error.a 00:02:32.031 LIB libspdk_bdev_split.a 00:02:32.031 SO libspdk_bdev_error.so.6.0 00:02:32.031 LIB libspdk_bdev_gpt.a 00:02:32.031 LIB libspdk_bdev_null.a 00:02:32.031 LIB libspdk_bdev_passthru.a 00:02:32.031 SO libspdk_bdev_split.so.6.0 00:02:32.031 LIB libspdk_bdev_ftl.a 00:02:32.031 LIB libspdk_bdev_aio.a 00:02:32.031 SYMLINK libspdk_blobfs_bdev.so 00:02:32.031 SO libspdk_bdev_gpt.so.6.0 00:02:32.031 LIB libspdk_bdev_malloc.a 00:02:32.031 LIB libspdk_bdev_zone_block.a 00:02:32.031 SO libspdk_bdev_passthru.so.6.0 00:02:32.031 SO libspdk_bdev_null.so.6.0 00:02:32.031 LIB libspdk_bdev_delay.a 00:02:32.031 SO libspdk_bdev_aio.so.6.0 00:02:32.031 SYMLINK libspdk_bdev_error.so 00:02:32.031 SO libspdk_bdev_ftl.so.6.0 00:02:32.031 SO libspdk_bdev_malloc.so.6.0 00:02:32.031 SYMLINK libspdk_bdev_split.so 00:02:32.031 LIB libspdk_bdev_iscsi.a 00:02:32.031 SO libspdk_bdev_zone_block.so.6.0 00:02:32.031 SO libspdk_bdev_delay.so.6.0 00:02:32.031 SYMLINK libspdk_bdev_gpt.so 00:02:32.031 SYMLINK libspdk_bdev_passthru.so 00:02:32.031 SYMLINK libspdk_bdev_aio.so 00:02:32.031 SYMLINK libspdk_bdev_null.so 00:02:32.031 SO libspdk_bdev_iscsi.so.6.0 00:02:32.031 SYMLINK libspdk_bdev_malloc.so 00:02:32.031 SYMLINK libspdk_bdev_ftl.so 00:02:32.031 SYMLINK libspdk_bdev_zone_block.so 00:02:32.031 SYMLINK libspdk_bdev_delay.so 00:02:32.031 LIB libspdk_bdev_lvol.a 00:02:32.031 SYMLINK libspdk_bdev_iscsi.so 00:02:32.031 LIB libspdk_bdev_virtio.a 00:02:32.031 SO libspdk_bdev_lvol.so.6.0 00:02:32.291 SO libspdk_bdev_virtio.so.6.0 00:02:32.291 SYMLINK libspdk_bdev_lvol.so 00:02:32.291 SYMLINK libspdk_bdev_virtio.so 00:02:32.551 LIB libspdk_bdev_raid.a 00:02:32.551 SO libspdk_bdev_raid.so.6.0 00:02:32.551 SYMLINK libspdk_bdev_raid.so 00:02:33.487 LIB libspdk_bdev_nvme.a 00:02:33.487 SO libspdk_bdev_nvme.so.7.1 00:02:33.746 SYMLINK libspdk_bdev_nvme.so 00:02:34.314 CC module/event/subsystems/scheduler/scheduler.o 00:02:34.314 CC module/event/subsystems/iobuf/iobuf.o 00:02:34.314 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:34.314 CC module/event/subsystems/fsdev/fsdev.o 00:02:34.314 CC module/event/subsystems/sock/sock.o 00:02:34.315 CC module/event/subsystems/vmd/vmd.o 00:02:34.315 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:34.315 CC module/event/subsystems/keyring/keyring.o 00:02:34.315 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:34.574 LIB libspdk_event_scheduler.a 00:02:34.574 SO libspdk_event_scheduler.so.4.0 00:02:34.574 LIB libspdk_event_keyring.a 00:02:34.574 LIB libspdk_event_fsdev.a 00:02:34.574 LIB libspdk_event_sock.a 00:02:34.574 LIB libspdk_event_vmd.a 00:02:34.574 LIB libspdk_event_iobuf.a 
00:02:34.574 LIB libspdk_event_vhost_blk.a 00:02:34.574 SO libspdk_event_fsdev.so.1.0 00:02:34.574 SO libspdk_event_keyring.so.1.0 00:02:34.574 SO libspdk_event_iobuf.so.3.0 00:02:34.574 SO libspdk_event_vhost_blk.so.3.0 00:02:34.574 SO libspdk_event_sock.so.5.0 00:02:34.574 SYMLINK libspdk_event_scheduler.so 00:02:34.574 SO libspdk_event_vmd.so.6.0 00:02:34.574 SYMLINK libspdk_event_keyring.so 00:02:34.574 SYMLINK libspdk_event_fsdev.so 00:02:34.574 SYMLINK libspdk_event_iobuf.so 00:02:34.574 SYMLINK libspdk_event_sock.so 00:02:34.574 SYMLINK libspdk_event_vhost_blk.so 00:02:34.574 SYMLINK libspdk_event_vmd.so 00:02:34.834 CC module/event/subsystems/accel/accel.o 00:02:35.093 LIB libspdk_event_accel.a 00:02:35.093 SO libspdk_event_accel.so.6.0 00:02:35.093 SYMLINK libspdk_event_accel.so 00:02:35.661 CC module/event/subsystems/bdev/bdev.o 00:02:35.661 LIB libspdk_event_bdev.a 00:02:35.661 SO libspdk_event_bdev.so.6.0 00:02:35.921 SYMLINK libspdk_event_bdev.so 00:02:36.180 CC module/event/subsystems/nbd/nbd.o 00:02:36.180 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:36.180 CC module/event/subsystems/ublk/ublk.o 00:02:36.180 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:36.180 CC module/event/subsystems/scsi/scsi.o 00:02:36.180 LIB libspdk_event_nbd.a 00:02:36.439 SO libspdk_event_nbd.so.6.0 00:02:36.439 LIB libspdk_event_ublk.a 00:02:36.439 LIB libspdk_event_scsi.a 00:02:36.439 SO libspdk_event_ublk.so.3.0 00:02:36.439 SO libspdk_event_scsi.so.6.0 00:02:36.439 SYMLINK libspdk_event_nbd.so 00:02:36.439 LIB libspdk_event_nvmf.a 00:02:36.439 SYMLINK libspdk_event_ublk.so 00:02:36.439 SO libspdk_event_nvmf.so.6.0 00:02:36.439 SYMLINK libspdk_event_scsi.so 00:02:36.439 SYMLINK libspdk_event_nvmf.so 00:02:36.697 CC module/event/subsystems/iscsi/iscsi.o 00:02:36.697 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:36.957 LIB libspdk_event_iscsi.a 00:02:36.957 LIB libspdk_event_vhost_scsi.a 00:02:36.957 SO libspdk_event_iscsi.so.6.0 00:02:36.957 SO libspdk_event_vhost_scsi.so.3.0 00:02:36.957 SYMLINK libspdk_event_iscsi.so 00:02:36.957 SYMLINK libspdk_event_vhost_scsi.so 00:02:37.215 SO libspdk.so.6.0 00:02:37.215 SYMLINK libspdk.so 00:02:37.474 CC app/trace_record/trace_record.o 00:02:37.749 CC app/spdk_top/spdk_top.o 00:02:37.749 CC app/spdk_nvme_identify/identify.o 00:02:37.749 CXX app/trace/trace.o 00:02:37.749 TEST_HEADER include/spdk/accel.h 00:02:37.749 TEST_HEADER include/spdk/accel_module.h 00:02:37.749 TEST_HEADER include/spdk/assert.h 00:02:37.749 TEST_HEADER include/spdk/base64.h 00:02:37.749 TEST_HEADER include/spdk/barrier.h 00:02:37.749 CC app/spdk_nvme_discover/discovery_aer.o 00:02:37.749 TEST_HEADER include/spdk/bdev.h 00:02:37.749 TEST_HEADER include/spdk/bdev_module.h 00:02:37.749 TEST_HEADER include/spdk/bdev_zone.h 00:02:37.749 TEST_HEADER include/spdk/bit_array.h 00:02:37.749 TEST_HEADER include/spdk/blob_bdev.h 00:02:37.749 TEST_HEADER include/spdk/bit_pool.h 00:02:37.749 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:37.749 TEST_HEADER include/spdk/blobfs.h 00:02:37.749 CC test/rpc_client/rpc_client_test.o 00:02:37.749 TEST_HEADER include/spdk/blob.h 00:02:37.749 CC app/spdk_nvme_perf/perf.o 00:02:37.749 TEST_HEADER include/spdk/config.h 00:02:37.749 TEST_HEADER include/spdk/conf.h 00:02:37.749 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:37.749 TEST_HEADER include/spdk/crc16.h 00:02:37.749 TEST_HEADER include/spdk/cpuset.h 00:02:37.749 TEST_HEADER include/spdk/crc32.h 00:02:37.749 TEST_HEADER include/spdk/crc64.h 00:02:37.749 TEST_HEADER include/spdk/dma.h 
00:02:37.749 CC app/spdk_lspci/spdk_lspci.o 00:02:37.749 TEST_HEADER include/spdk/dif.h 00:02:37.749 TEST_HEADER include/spdk/env_dpdk.h 00:02:37.749 TEST_HEADER include/spdk/endian.h 00:02:37.749 TEST_HEADER include/spdk/event.h 00:02:37.749 TEST_HEADER include/spdk/env.h 00:02:37.749 TEST_HEADER include/spdk/fd_group.h 00:02:37.749 TEST_HEADER include/spdk/file.h 00:02:37.749 TEST_HEADER include/spdk/fd.h 00:02:37.749 TEST_HEADER include/spdk/fsdev.h 00:02:37.749 TEST_HEADER include/spdk/fsdev_module.h 00:02:37.749 TEST_HEADER include/spdk/ftl.h 00:02:37.749 TEST_HEADER include/spdk/hexlify.h 00:02:37.749 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:37.749 TEST_HEADER include/spdk/gpt_spec.h 00:02:37.749 TEST_HEADER include/spdk/idxd.h 00:02:37.749 TEST_HEADER include/spdk/histogram_data.h 00:02:37.749 CC app/nvmf_tgt/nvmf_main.o 00:02:37.749 TEST_HEADER include/spdk/idxd_spec.h 00:02:37.749 CC app/spdk_dd/spdk_dd.o 00:02:37.749 TEST_HEADER include/spdk/ioat.h 00:02:37.749 TEST_HEADER include/spdk/init.h 00:02:37.749 TEST_HEADER include/spdk/ioat_spec.h 00:02:37.749 TEST_HEADER include/spdk/iscsi_spec.h 00:02:37.749 TEST_HEADER include/spdk/json.h 00:02:37.749 TEST_HEADER include/spdk/jsonrpc.h 00:02:37.749 TEST_HEADER include/spdk/keyring_module.h 00:02:37.749 TEST_HEADER include/spdk/keyring.h 00:02:37.749 TEST_HEADER include/spdk/log.h 00:02:37.749 TEST_HEADER include/spdk/likely.h 00:02:37.749 TEST_HEADER include/spdk/lvol.h 00:02:37.749 TEST_HEADER include/spdk/memory.h 00:02:37.749 TEST_HEADER include/spdk/md5.h 00:02:37.749 TEST_HEADER include/spdk/mmio.h 00:02:37.749 TEST_HEADER include/spdk/nbd.h 00:02:37.749 TEST_HEADER include/spdk/net.h 00:02:37.749 TEST_HEADER include/spdk/nvme.h 00:02:37.749 TEST_HEADER include/spdk/notify.h 00:02:37.749 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:37.749 TEST_HEADER include/spdk/nvme_intel.h 00:02:37.749 TEST_HEADER include/spdk/nvme_spec.h 00:02:37.749 TEST_HEADER include/spdk/nvme_zns.h 00:02:37.749 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:37.749 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:37.749 TEST_HEADER include/spdk/nvmf.h 00:02:37.749 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:37.749 TEST_HEADER include/spdk/nvmf_transport.h 00:02:37.749 TEST_HEADER include/spdk/opal.h 00:02:37.749 TEST_HEADER include/spdk/nvmf_spec.h 00:02:37.749 TEST_HEADER include/spdk/opal_spec.h 00:02:37.749 TEST_HEADER include/spdk/pci_ids.h 00:02:37.749 TEST_HEADER include/spdk/pipe.h 00:02:37.749 TEST_HEADER include/spdk/queue.h 00:02:37.750 TEST_HEADER include/spdk/scheduler.h 00:02:37.750 CC app/iscsi_tgt/iscsi_tgt.o 00:02:37.750 TEST_HEADER include/spdk/reduce.h 00:02:37.750 TEST_HEADER include/spdk/scsi.h 00:02:37.750 TEST_HEADER include/spdk/rpc.h 00:02:37.750 TEST_HEADER include/spdk/sock.h 00:02:37.750 CC app/spdk_tgt/spdk_tgt.o 00:02:37.750 TEST_HEADER include/spdk/scsi_spec.h 00:02:37.750 TEST_HEADER include/spdk/stdinc.h 00:02:37.750 TEST_HEADER include/spdk/string.h 00:02:37.750 TEST_HEADER include/spdk/trace.h 00:02:37.750 TEST_HEADER include/spdk/thread.h 00:02:37.750 TEST_HEADER include/spdk/trace_parser.h 00:02:37.750 TEST_HEADER include/spdk/util.h 00:02:37.750 TEST_HEADER include/spdk/tree.h 00:02:37.750 TEST_HEADER include/spdk/ublk.h 00:02:37.750 TEST_HEADER include/spdk/uuid.h 00:02:37.750 TEST_HEADER include/spdk/version.h 00:02:37.750 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:37.750 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:37.750 TEST_HEADER include/spdk/vmd.h 00:02:37.750 TEST_HEADER include/spdk/vhost.h 
00:02:37.750 TEST_HEADER include/spdk/zipf.h 00:02:37.750 TEST_HEADER include/spdk/xor.h 00:02:37.750 CXX test/cpp_headers/accel.o 00:02:37.750 CXX test/cpp_headers/accel_module.o 00:02:37.750 CXX test/cpp_headers/assert.o 00:02:37.750 CXX test/cpp_headers/barrier.o 00:02:37.750 CXX test/cpp_headers/bdev.o 00:02:37.750 CXX test/cpp_headers/bdev_module.o 00:02:37.750 CXX test/cpp_headers/base64.o 00:02:37.750 CXX test/cpp_headers/bdev_zone.o 00:02:37.750 CXX test/cpp_headers/bit_pool.o 00:02:37.750 CXX test/cpp_headers/blob_bdev.o 00:02:37.750 CXX test/cpp_headers/bit_array.o 00:02:37.750 CXX test/cpp_headers/blobfs_bdev.o 00:02:37.750 CXX test/cpp_headers/blobfs.o 00:02:37.750 CXX test/cpp_headers/blob.o 00:02:37.750 CXX test/cpp_headers/conf.o 00:02:37.750 CXX test/cpp_headers/config.o 00:02:37.750 CXX test/cpp_headers/crc16.o 00:02:37.750 CXX test/cpp_headers/cpuset.o 00:02:37.750 CXX test/cpp_headers/crc32.o 00:02:37.750 CXX test/cpp_headers/crc64.o 00:02:37.750 CXX test/cpp_headers/dif.o 00:02:37.750 CXX test/cpp_headers/dma.o 00:02:37.750 CXX test/cpp_headers/endian.o 00:02:37.750 CXX test/cpp_headers/env_dpdk.o 00:02:37.750 CXX test/cpp_headers/env.o 00:02:37.750 CXX test/cpp_headers/event.o 00:02:37.750 CXX test/cpp_headers/fd_group.o 00:02:37.750 CXX test/cpp_headers/fd.o 00:02:37.750 CXX test/cpp_headers/file.o 00:02:37.750 CXX test/cpp_headers/fsdev.o 00:02:37.750 CXX test/cpp_headers/fuse_dispatcher.o 00:02:37.750 CXX test/cpp_headers/ftl.o 00:02:37.750 CXX test/cpp_headers/gpt_spec.o 00:02:37.750 CXX test/cpp_headers/fsdev_module.o 00:02:37.750 CXX test/cpp_headers/idxd.o 00:02:37.750 CXX test/cpp_headers/histogram_data.o 00:02:37.750 CXX test/cpp_headers/hexlify.o 00:02:37.750 CXX test/cpp_headers/idxd_spec.o 00:02:37.750 CXX test/cpp_headers/ioat.o 00:02:37.750 CXX test/cpp_headers/init.o 00:02:37.750 CXX test/cpp_headers/ioat_spec.o 00:02:37.750 CXX test/cpp_headers/iscsi_spec.o 00:02:37.750 CXX test/cpp_headers/json.o 00:02:37.750 CXX test/cpp_headers/keyring.o 00:02:37.750 CXX test/cpp_headers/jsonrpc.o 00:02:37.750 CXX test/cpp_headers/likely.o 00:02:37.750 CXX test/cpp_headers/keyring_module.o 00:02:37.750 CXX test/cpp_headers/log.o 00:02:37.750 CXX test/cpp_headers/lvol.o 00:02:37.750 CXX test/cpp_headers/memory.o 00:02:37.750 CXX test/cpp_headers/mmio.o 00:02:37.750 CXX test/cpp_headers/md5.o 00:02:37.750 CXX test/cpp_headers/notify.o 00:02:37.750 CXX test/cpp_headers/nbd.o 00:02:37.750 CXX test/cpp_headers/net.o 00:02:37.750 CXX test/cpp_headers/nvme.o 00:02:37.750 CXX test/cpp_headers/nvme_intel.o 00:02:37.750 CXX test/cpp_headers/nvme_ocssd.o 00:02:37.750 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:37.750 CXX test/cpp_headers/nvmf_cmd.o 00:02:37.750 CXX test/cpp_headers/nvme_spec.o 00:02:37.750 CXX test/cpp_headers/nvme_zns.o 00:02:37.750 CC test/app/histogram_perf/histogram_perf.o 00:02:37.750 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:37.750 CXX test/cpp_headers/nvmf.o 00:02:37.750 CXX test/cpp_headers/nvmf_spec.o 00:02:37.750 CC test/app/stub/stub.o 00:02:37.750 CXX test/cpp_headers/nvmf_transport.o 00:02:37.750 CXX test/cpp_headers/opal.o 00:02:37.750 CXX test/cpp_headers/opal_spec.o 00:02:37.750 CXX test/cpp_headers/pci_ids.o 00:02:37.750 CXX test/cpp_headers/pipe.o 00:02:37.750 CXX test/cpp_headers/queue.o 00:02:37.750 CXX test/cpp_headers/reduce.o 00:02:37.750 CXX test/cpp_headers/rpc.o 00:02:37.750 CXX test/cpp_headers/scheduler.o 00:02:37.750 CXX test/cpp_headers/scsi_spec.o 00:02:37.750 CXX test/cpp_headers/scsi.o 00:02:37.750 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:37.750 CXX test/cpp_headers/sock.o 00:02:37.750 CXX test/cpp_headers/stdinc.o 00:02:37.750 CXX test/cpp_headers/string.o 00:02:37.750 CC test/thread/poller_perf/poller_perf.o 00:02:37.750 CXX test/cpp_headers/thread.o 00:02:37.750 CC test/app/jsoncat/jsoncat.o 00:02:37.750 CXX test/cpp_headers/trace.o 00:02:37.750 CC test/env/vtophys/vtophys.o 00:02:37.750 CC app/fio/nvme/fio_plugin.o 00:02:37.750 CC test/env/memory/memory_ut.o 00:02:37.750 CC examples/util/zipf/zipf.o 00:02:37.750 CC test/dma/test_dma/test_dma.o 00:02:37.750 CC examples/ioat/perf/perf.o 00:02:37.750 CC examples/ioat/verify/verify.o 00:02:37.750 CXX test/cpp_headers/trace_parser.o 00:02:37.750 CC test/app/bdev_svc/bdev_svc.o 00:02:37.750 CC test/env/pci/pci_ut.o 00:02:38.027 CC app/fio/bdev/fio_plugin.o 00:02:38.027 CXX test/cpp_headers/tree.o 00:02:38.027 LINK spdk_lspci 00:02:38.308 CXX test/cpp_headers/ublk.o 00:02:38.308 LINK interrupt_tgt 00:02:38.308 LINK rpc_client_test 00:02:38.308 LINK spdk_nvme_discover 00:02:38.308 LINK nvmf_tgt 00:02:38.308 CC test/env/mem_callbacks/mem_callbacks.o 00:02:38.308 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:38.568 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:38.568 LINK spdk_trace_record 00:02:38.568 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:38.568 LINK iscsi_tgt 00:02:38.568 LINK spdk_tgt 00:02:38.568 LINK histogram_perf 00:02:38.568 LINK jsoncat 00:02:38.568 LINK poller_perf 00:02:38.568 LINK zipf 00:02:38.568 LINK vtophys 00:02:38.568 LINK env_dpdk_post_init 00:02:38.568 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:38.568 CXX test/cpp_headers/util.o 00:02:38.568 CXX test/cpp_headers/uuid.o 00:02:38.568 CXX test/cpp_headers/version.o 00:02:38.568 LINK stub 00:02:38.568 CXX test/cpp_headers/vfio_user_pci.o 00:02:38.568 CXX test/cpp_headers/vfio_user_spec.o 00:02:38.568 CXX test/cpp_headers/vhost.o 00:02:38.568 CXX test/cpp_headers/vmd.o 00:02:38.568 CXX test/cpp_headers/xor.o 00:02:38.568 CXX test/cpp_headers/zipf.o 00:02:38.568 LINK bdev_svc 00:02:38.568 LINK verify 00:02:38.568 LINK spdk_dd 00:02:38.568 LINK ioat_perf 00:02:38.826 LINK spdk_trace 00:02:38.826 LINK pci_ut 00:02:38.826 LINK spdk_nvme 00:02:38.826 LINK test_dma 00:02:38.826 LINK spdk_bdev 00:02:39.084 LINK vhost_fuzz 00:02:39.084 LINK nvme_fuzz 00:02:39.084 LINK spdk_nvme_perf 00:02:39.084 LINK spdk_top 00:02:39.084 LINK spdk_nvme_identify 00:02:39.084 LINK mem_callbacks 00:02:39.084 CC test/event/reactor_perf/reactor_perf.o 00:02:39.084 CC test/event/event_perf/event_perf.o 00:02:39.084 CC test/event/reactor/reactor.o 00:02:39.084 CC examples/sock/hello_world/hello_sock.o 00:02:39.084 CC app/vhost/vhost.o 00:02:39.084 CC examples/idxd/perf/perf.o 00:02:39.084 CC examples/vmd/led/led.o 00:02:39.084 CC examples/vmd/lsvmd/lsvmd.o 00:02:39.084 CC test/event/scheduler/scheduler.o 00:02:39.084 CC test/event/app_repeat/app_repeat.o 00:02:39.084 CC examples/thread/thread/thread_ex.o 00:02:39.343 LINK reactor_perf 00:02:39.343 LINK event_perf 00:02:39.343 LINK led 00:02:39.343 LINK lsvmd 00:02:39.343 LINK reactor 00:02:39.343 LINK vhost 00:02:39.343 LINK app_repeat 00:02:39.343 LINK hello_sock 00:02:39.343 LINK scheduler 00:02:39.343 LINK thread 00:02:39.343 CC test/nvme/sgl/sgl.o 00:02:39.343 CC test/nvme/reserve/reserve.o 00:02:39.343 CC test/nvme/simple_copy/simple_copy.o 00:02:39.343 CC test/nvme/fused_ordering/fused_ordering.o 00:02:39.343 CC test/nvme/startup/startup.o 00:02:39.343 CC test/nvme/err_injection/err_injection.o 00:02:39.343 LINK 
idxd_perf 00:02:39.343 CC test/nvme/boot_partition/boot_partition.o 00:02:39.343 CC test/nvme/fdp/fdp.o 00:02:39.343 LINK memory_ut 00:02:39.343 CC test/nvme/aer/aer.o 00:02:39.343 CC test/nvme/connect_stress/connect_stress.o 00:02:39.343 CC test/nvme/cuse/cuse.o 00:02:39.343 CC test/nvme/reset/reset.o 00:02:39.343 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:39.343 CC test/nvme/overhead/overhead.o 00:02:39.343 CC test/nvme/e2edp/nvme_dp.o 00:02:39.343 CC test/nvme/compliance/nvme_compliance.o 00:02:39.343 CC test/accel/dif/dif.o 00:02:39.343 CC test/blobfs/mkfs/mkfs.o 00:02:39.601 CC test/lvol/esnap/esnap.o 00:02:39.601 LINK boot_partition 00:02:39.601 LINK err_injection 00:02:39.601 LINK startup 00:02:39.601 LINK connect_stress 00:02:39.601 LINK doorbell_aers 00:02:39.601 LINK fused_ordering 00:02:39.601 LINK reserve 00:02:39.601 LINK simple_copy 00:02:39.601 LINK sgl 00:02:39.601 LINK mkfs 00:02:39.601 LINK reset 00:02:39.601 LINK aer 00:02:39.601 LINK overhead 00:02:39.601 LINK nvme_dp 00:02:39.859 LINK fdp 00:02:39.859 LINK nvme_compliance 00:02:39.859 CC examples/nvme/reconnect/reconnect.o 00:02:39.859 CC examples/nvme/abort/abort.o 00:02:39.859 CC examples/nvme/arbitration/arbitration.o 00:02:39.859 CC examples/nvme/hotplug/hotplug.o 00:02:39.859 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:39.859 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:39.859 CC examples/nvme/hello_world/hello_world.o 00:02:39.859 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:39.859 LINK iscsi_fuzz 00:02:39.859 CC examples/accel/perf/accel_perf.o 00:02:39.859 CC examples/blob/hello_world/hello_blob.o 00:02:39.859 CC examples/blob/cli/blobcli.o 00:02:39.859 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:40.117 LINK dif 00:02:40.117 LINK cmb_copy 00:02:40.117 LINK pmr_persistence 00:02:40.117 LINK hotplug 00:02:40.117 LINK hello_world 00:02:40.117 LINK arbitration 00:02:40.117 LINK reconnect 00:02:40.117 LINK abort 00:02:40.117 LINK hello_blob 00:02:40.117 LINK hello_fsdev 00:02:40.117 LINK nvme_manage 00:02:40.375 LINK accel_perf 00:02:40.375 LINK blobcli 00:02:40.375 LINK cuse 00:02:40.633 CC test/bdev/bdevio/bdevio.o 00:02:40.890 CC examples/bdev/bdevperf/bdevperf.o 00:02:40.890 CC examples/bdev/hello_world/hello_bdev.o 00:02:40.890 LINK bdevio 00:02:41.148 LINK hello_bdev 00:02:41.406 LINK bdevperf 00:02:41.972 CC examples/nvmf/nvmf/nvmf.o 00:02:42.230 LINK nvmf 00:02:43.164 LINK esnap 00:02:43.422 00:02:43.422 real 0m54.995s 00:02:43.422 user 7m41.847s 00:02:43.422 sys 4m10.953s 00:02:43.422 12:41:09 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:43.422 12:41:09 make -- common/autotest_common.sh@10 -- $ set +x 00:02:43.422 ************************************ 00:02:43.422 END TEST make 00:02:43.422 ************************************ 00:02:43.422 12:41:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:43.422 12:41:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:43.422 12:41:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:43.422 12:41:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.422 12:41:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:43.422 12:41:09 -- pm/common@44 -- $ pid=3877319 00:02:43.422 12:41:09 -- pm/common@50 -- $ kill -TERM 3877319 00:02:43.422 12:41:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.422 12:41:09 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:43.422 12:41:09 -- pm/common@44 -- $ pid=3877321
00:02:43.422 12:41:09 -- pm/common@50 -- $ kill -TERM 3877321
00:02:43.422 12:41:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:43.422 12:41:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:43.422 12:41:09 -- pm/common@44 -- $ pid=3877323
00:02:43.422 12:41:09 -- pm/common@50 -- $ kill -TERM 3877323
00:02:43.422 12:41:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:43.422 12:41:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:43.422 12:41:09 -- pm/common@44 -- $ pid=3877347
00:02:43.422 12:41:09 -- pm/common@50 -- $ sudo -E kill -TERM 3877347
00:02:43.422 12:41:09 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:02:43.422 12:41:09 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:02:43.422 12:41:09 -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:02:43.422 12:41:09 -- common/autotest_common.sh@1711 -- # lcov --version
00:02:43.422 12:41:09 -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:02:43.681 12:41:09 -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:02:43.681 12:41:09 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:02:43.681 12:41:09 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:02:43.681 12:41:09 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:02:43.681 12:41:09 -- scripts/common.sh@336 -- # IFS=.-:
00:02:43.681 12:41:09 -- scripts/common.sh@336 -- # read -ra ver1
00:02:43.681 12:41:09 -- scripts/common.sh@337 -- # IFS=.-:
00:02:43.681 12:41:09 -- scripts/common.sh@337 -- # read -ra ver2
00:02:43.681 12:41:09 -- scripts/common.sh@338 -- # local 'op=<'
00:02:43.681 12:41:09 -- scripts/common.sh@340 -- # ver1_l=2
00:02:43.681 12:41:09 -- scripts/common.sh@341 -- # ver2_l=1
00:02:43.681 12:41:09 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:02:43.681 12:41:09 -- scripts/common.sh@344 -- # case "$op" in
00:02:43.681 12:41:09 -- scripts/common.sh@345 -- # : 1
00:02:43.681 12:41:09 -- scripts/common.sh@364 -- # (( v = 0 ))
00:02:43.681 12:41:09 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:43.681 12:41:09 -- scripts/common.sh@365 -- # decimal 1
00:02:43.681 12:41:09 -- scripts/common.sh@353 -- # local d=1
00:02:43.681 12:41:09 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:02:43.681 12:41:09 -- scripts/common.sh@355 -- # echo 1
00:02:43.681 12:41:09 -- scripts/common.sh@365 -- # ver1[v]=1
00:02:43.681 12:41:09 -- scripts/common.sh@366 -- # decimal 2
00:02:43.681 12:41:09 -- scripts/common.sh@353 -- # local d=2
00:02:43.681 12:41:09 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:02:43.681 12:41:09 -- scripts/common.sh@355 -- # echo 2
00:02:43.681 12:41:09 -- scripts/common.sh@366 -- # ver2[v]=2
00:02:43.681 12:41:09 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:02:43.681 12:41:09 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:02:43.681 12:41:09 -- scripts/common.sh@368 -- # return 0
00:02:43.681 12:41:09 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:02:43.681 12:41:09 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:02:43.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:43.681 --rc genhtml_branch_coverage=1
00:02:43.681 --rc genhtml_function_coverage=1
00:02:43.681 --rc genhtml_legend=1
00:02:43.681 --rc geninfo_all_blocks=1
00:02:43.681 --rc geninfo_unexecuted_blocks=1
00:02:43.681
00:02:43.681 '
00:02:43.681 12:41:09 -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:02:43.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:43.681 --rc genhtml_branch_coverage=1
00:02:43.681 --rc genhtml_function_coverage=1
00:02:43.681 --rc genhtml_legend=1
00:02:43.681 --rc geninfo_all_blocks=1
00:02:43.681 --rc geninfo_unexecuted_blocks=1
00:02:43.681
00:02:43.681 '
00:02:43.681 12:41:09 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:02:43.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:43.681 --rc genhtml_branch_coverage=1
00:02:43.681 --rc genhtml_function_coverage=1
00:02:43.681 --rc genhtml_legend=1
00:02:43.681 --rc geninfo_all_blocks=1
00:02:43.681 --rc geninfo_unexecuted_blocks=1
00:02:43.681
00:02:43.681 '
00:02:43.681 12:41:09 -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:02:43.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:43.681 --rc genhtml_branch_coverage=1
00:02:43.681 --rc genhtml_function_coverage=1
00:02:43.681 --rc genhtml_legend=1
00:02:43.681 --rc geninfo_all_blocks=1
00:02:43.681 --rc geninfo_unexecuted_blocks=1
00:02:43.681
00:02:43.681 '
00:02:43.681 12:41:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:02:43.681 12:41:09 -- nvmf/common.sh@7 -- # uname -s
00:02:43.681 12:41:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:43.681 12:41:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:43.681 12:41:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:43.681 12:41:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:43.681 12:41:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:43.681 12:41:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:43.681 12:41:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:43.681 12:41:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:43.681 12:41:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:43.681 12:41:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:43.681 12:41:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:02:43.681 12:41:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:02:43.681 12:41:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:43.681 12:41:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:43.681 12:41:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:43.681 12:41:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:43.681 12:41:09 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:02:43.681 12:41:09 -- scripts/common.sh@15 -- # shopt -s extglob
00:02:43.681 12:41:09 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:43.681 12:41:09 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:43.681 12:41:09 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:43.681 12:41:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:43.681 12:41:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:43.682 12:41:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:43.682 12:41:09 -- paths/export.sh@5 -- # export PATH
00:02:43.682 12:41:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:43.682 12:41:09 -- nvmf/common.sh@51 -- # : 0
00:02:43.682 12:41:09 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:02:43.682 12:41:09 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:02:43.682 12:41:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:43.682 12:41:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:43.682 12:41:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:43.682 12:41:09 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:02:43.682 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:02:43.682 12:41:09 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:02:43.682 12:41:09 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:02:43.682 12:41:09 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:02:43.682 12:41:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:43.682 12:41:09 -- spdk/autotest.sh@32 -- # uname -s
00:02:43.682 12:41:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:43.682 12:41:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:43.682 12:41:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps
00:02:43.682 12:41:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:43.682 12:41:09 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps
00:02:43.682 12:41:09 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:43.682 12:41:09 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:43.682 12:41:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:43.682 12:41:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:43.682 12:41:09 -- spdk/autotest.sh@48 -- # udevadm_pid=3940122
00:02:43.682 12:41:09 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:43.682 12:41:09 -- pm/common@17 -- # local monitor
00:02:43.682 12:41:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:43.682 12:41:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:43.682 12:41:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:43.682 12:41:09 -- pm/common@21 -- # date +%s
00:02:43.682 12:41:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:43.682 12:41:09 -- pm/common@21 -- # date +%s
00:02:43.682 12:41:09 -- pm/common@25 -- # sleep 1
00:02:43.682 12:41:09 -- pm/common@21 -- # date +%s
00:02:43.682 12:41:09 -- pm/common@21 -- # date +%s
00:02:43.682 12:41:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732707669
00:02:43.682 12:41:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732707669
00:02:43.682 12:41:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732707669
00:02:43.682 12:41:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732707669
00:02:43.682 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732707669_collect-vmstat.pm.log
00:02:43.682 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732707669_collect-cpu-load.pm.log
00:02:43.682 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732707669_collect-cpu-temp.pm.log
00:02:43.682 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732707669_collect-bmc-pm.bmc.pm.log
00:02:44.617 12:41:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:44.617 12:41:10 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:44.617 12:41:10 -- common/autotest_common.sh@726 -- # xtrace_disable
00:02:44.617 12:41:10 -- common/autotest_common.sh@10 -- # set +x
00:02:44.617 12:41:10 -- spdk/autotest.sh@59 -- # create_test_list
00:02:44.617 12:41:10 -- common/autotest_common.sh@752 -- # xtrace_disable
00:02:44.617 12:41:10 -- common/autotest_common.sh@10 -- # set +x
00:02:44.617 12:41:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh
00:02:44.617 12:41:10 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:44.617 12:41:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:44.617 12:41:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:02:44.617 12:41:10 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:44.617 12:41:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:44.875 12:41:11 -- common/autotest_common.sh@1457 -- # uname
00:02:44.875 12:41:11 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:02:44.875 12:41:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:44.875 12:41:11 -- common/autotest_common.sh@1477 -- # uname
00:02:44.875 12:41:11 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:02:44.875 12:41:11 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:02:44.875 12:41:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:02:44.875 lcov: LCOV version 1.15
00:02:44.875 12:41:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info
00:03:02.963 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:02.963 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:09.676 12:41:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:09.676 12:41:35 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:09.676 12:41:35 -- common/autotest_common.sh@10 -- # set +x
00:03:09.676 12:41:35 -- spdk/autotest.sh@78 -- # rm -f
00:03:09.676 12:41:35 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:13.905 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:13.905 0000:d8:00.0 (8086 0a54): Already using the nvme driver
00:03:13.905 12:41:40 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:13.905 12:41:40 --
common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:13.905 12:41:40 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:13.905 12:41:40 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:13.905 12:41:40 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:13.905 12:41:40 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:13.905 12:41:40 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:13.905 12:41:40 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:03:13.905 12:41:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:13.905 12:41:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:13.905 12:41:40 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:13.905 12:41:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:13.905 12:41:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:13.905 12:41:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:13.905 12:41:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:13.905 12:41:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:13.905 12:41:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:13.905 12:41:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:13.905 12:41:40 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:13.905 No valid GPT data, bailing 00:03:13.905 12:41:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:13.905 12:41:40 -- scripts/common.sh@394 -- # pt= 00:03:13.905 12:41:40 -- scripts/common.sh@395 -- # return 1 00:03:13.905 12:41:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:13.905 1+0 records in 00:03:13.905 1+0 records out 00:03:13.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00182499 s, 575 MB/s 00:03:13.905 12:41:40 -- spdk/autotest.sh@105 -- # sync 00:03:13.905 12:41:40 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:13.905 12:41:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:13.905 12:41:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:22.047 12:41:47 -- spdk/autotest.sh@111 -- # uname -s 00:03:22.047 12:41:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:22.047 12:41:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:22.047 12:41:47 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:25.343 Hugepages 00:03:25.343 node hugesize free / total 00:03:25.343 node0 1048576kB 0 / 0 00:03:25.343 node0 2048kB 0 / 0 00:03:25.343 node1 1048576kB 0 / 0 00:03:25.343 node1 2048kB 0 / 0 00:03:25.343 00:03:25.343 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:25.343 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:25.601 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:25.601 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:25.601 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:25.601 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:25.601 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:25.601 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:25.601 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:25.601 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:25.601 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:25.601 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:25.601 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:25.601 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:25.601 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:25.601 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:25.601 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:25.601 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:25.601 12:41:51 -- spdk/autotest.sh@117 -- # uname -s 00:03:25.601 12:41:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:25.601 12:41:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:25.601 12:41:51 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:29.788 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:29.788 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.317 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:32.317 12:41:58 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:32.882 12:41:59 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:32.882 12:41:59 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:32.882 12:41:59 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:32.882 12:41:59 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:32.882 12:41:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:32.882 12:41:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:32.882 12:41:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:32.882 12:41:59 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:32.882 12:41:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:33.139 12:41:59 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:33.139 12:41:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:03:33.139 12:41:59 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.341 Waiting for block devices as requested 00:03:37.341 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:37.341 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:37.341 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:37.341 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:37.341 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:37.341 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:37.598 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:37.598 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:37.598 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:37.856 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:37.856 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:37.856 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:38.114 0000:80:04.3 (8086 
2021): vfio-pci -> ioatdma 00:03:38.114 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:38.114 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:38.372 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:38.372 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:03:38.631 12:42:04 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:38.631 12:42:04 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:03:38.631 12:42:04 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:38.631 12:42:04 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:03:38.631 12:42:04 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:03:38.631 12:42:04 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:03:38.631 12:42:04 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:03:38.631 12:42:04 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:38.631 12:42:04 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:38.631 12:42:04 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:38.631 12:42:04 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:38.631 12:42:04 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:38.631 12:42:04 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:38.631 12:42:04 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:38.631 12:42:04 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:38.631 12:42:04 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:38.631 12:42:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:38.631 12:42:04 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:38.631 12:42:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:38.631 12:42:04 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:38.631 12:42:04 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:38.631 12:42:04 -- common/autotest_common.sh@1543 -- # continue 00:03:38.631 12:42:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:38.631 12:42:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:38.631 12:42:04 -- common/autotest_common.sh@10 -- # set +x 00:03:38.631 12:42:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:38.631 12:42:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.631 12:42:04 -- common/autotest_common.sh@10 -- # set +x 00:03:38.631 12:42:04 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:42.814 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 
00:03:42.814 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:42.814 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:44.720 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:44.978 12:42:11 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:44.978 12:42:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:44.978 12:42:11 -- common/autotest_common.sh@10 -- # set +x 00:03:44.978 12:42:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:44.978 12:42:11 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:44.978 12:42:11 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:44.978 12:42:11 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:44.978 12:42:11 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:44.978 12:42:11 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:44.978 12:42:11 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:44.978 12:42:11 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:44.978 12:42:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:44.978 12:42:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:44.978 12:42:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:44.978 12:42:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:44.978 12:42:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:44.978 12:42:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:44.978 12:42:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:03:44.978 12:42:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:44.978 12:42:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:03:44.979 12:42:11 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:44.979 12:42:11 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:44.979 12:42:11 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:44.979 12:42:11 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:44.979 12:42:11 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:03:44.979 12:42:11 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:03:44.979 12:42:11 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3958004 00:03:44.979 12:42:11 -- common/autotest_common.sh@1585 -- # waitforlisten 3958004 00:03:44.979 12:42:11 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:44.979 12:42:11 -- common/autotest_common.sh@835 -- # '[' -z 3958004 ']' 00:03:44.979 12:42:11 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:44.979 12:42:11 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:44.979 12:42:11 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:44.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:44.979 12:42:11 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:44.979 12:42:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.236 [2024-11-27 12:42:11.406982] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
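[editor's note] The steps that follow start spdk_tgt, wait for its RPC socket, attach the NVMe controller at 0000:d8:00.0 as bdev nvme0, and then attempt an Opal revert, which this drive rejects ("nvme0 not support opal"). A hedged sketch of the same flow driven by hand, assuming the default RPC socket /var/tmp/spdk.sock and the workspace paths above; the real harness uses waitforlisten with retries rather than this crude loop:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt &                       # start the target app
  pid=$!
  until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2                                      # crude stand-in for waitforlisten
  done
  $SPDK/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
  $SPDK/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test \
    || echo "controller reports no Opal support"   # the expected outcome here
  kill $pid; wait $pid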
00:03:45.237 [2024-11-27 12:42:11.407034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958004 ] 00:03:45.237 [2024-11-27 12:42:11.495226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.237 [2024-11-27 12:42:11.536402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.170 12:42:12 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:46.170 12:42:12 -- common/autotest_common.sh@868 -- # return 0 00:03:46.170 12:42:12 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:46.170 12:42:12 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:46.170 12:42:12 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:03:49.453 nvme0n1 00:03:49.453 12:42:15 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:49.453 [2024-11-27 12:42:15.449496] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:49.453 request: 00:03:49.453 { 00:03:49.453 "nvme_ctrlr_name": "nvme0", 00:03:49.453 "password": "test", 00:03:49.453 "method": "bdev_nvme_opal_revert", 00:03:49.453 "req_id": 1 00:03:49.453 } 00:03:49.453 Got JSON-RPC error response 00:03:49.453 response: 00:03:49.453 { 00:03:49.453 "code": -32602, 00:03:49.453 "message": "Invalid parameters" 00:03:49.453 } 00:03:49.453 12:42:15 -- common/autotest_common.sh@1591 -- # true 00:03:49.453 12:42:15 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:49.453 12:42:15 -- common/autotest_common.sh@1595 -- # killprocess 3958004 00:03:49.453 12:42:15 -- common/autotest_common.sh@954 -- # '[' -z 3958004 ']' 00:03:49.453 12:42:15 -- common/autotest_common.sh@958 -- # kill -0 3958004 00:03:49.453 12:42:15 -- common/autotest_common.sh@959 -- # uname 00:03:49.453 12:42:15 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:49.453 12:42:15 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3958004 00:03:49.453 12:42:15 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:49.453 12:42:15 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:49.453 12:42:15 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3958004' 00:03:49.453 killing process with pid 3958004 00:03:49.453 12:42:15 -- common/autotest_common.sh@973 -- # kill 3958004 00:03:49.453 12:42:15 -- common/autotest_common.sh@978 -- # wait 3958004 00:03:51.979 12:42:18 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:51.979 12:42:18 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:51.979 12:42:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:51.979 12:42:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:51.979 12:42:18 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:51.979 12:42:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.979 12:42:18 -- common/autotest_common.sh@10 -- # set +x 00:03:51.979 12:42:18 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:51.979 12:42:18 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:51.979 12:42:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.979 12:42:18 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:03:51.979 12:42:18 -- common/autotest_common.sh@10 -- # set +x 00:03:51.979 ************************************ 00:03:51.979 START TEST env 00:03:51.979 ************************************ 00:03:51.979 12:42:18 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:51.979 * Looking for test storage... 00:03:51.979 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:03:51.979 12:42:18 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:51.979 12:42:18 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:51.979 12:42:18 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:51.979 12:42:18 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:51.979 12:42:18 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.979 12:42:18 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.979 12:42:18 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.979 12:42:18 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.979 12:42:18 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.979 12:42:18 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.979 12:42:18 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.979 12:42:18 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.979 12:42:18 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.979 12:42:18 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.979 12:42:18 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.979 12:42:18 env -- scripts/common.sh@344 -- # case "$op" in 00:03:51.979 12:42:18 env -- scripts/common.sh@345 -- # : 1 00:03:51.979 12:42:18 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.979 12:42:18 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.979 12:42:18 env -- scripts/common.sh@365 -- # decimal 1 00:03:51.980 12:42:18 env -- scripts/common.sh@353 -- # local d=1 00:03:51.980 12:42:18 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.980 12:42:18 env -- scripts/common.sh@355 -- # echo 1 00:03:51.980 12:42:18 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.980 12:42:18 env -- scripts/common.sh@366 -- # decimal 2 00:03:51.980 12:42:18 env -- scripts/common.sh@353 -- # local d=2 00:03:51.980 12:42:18 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.980 12:42:18 env -- scripts/common.sh@355 -- # echo 2 00:03:51.980 12:42:18 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.980 12:42:18 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.980 12:42:18 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.980 12:42:18 env -- scripts/common.sh@368 -- # return 0 00:03:51.980 12:42:18 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.980 12:42:18 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:51.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.980 --rc genhtml_branch_coverage=1 00:03:51.980 --rc genhtml_function_coverage=1 00:03:51.980 --rc genhtml_legend=1 00:03:51.980 --rc geninfo_all_blocks=1 00:03:51.980 --rc geninfo_unexecuted_blocks=1 00:03:51.980 00:03:51.980 ' 00:03:51.980 12:42:18 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:51.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.980 --rc genhtml_branch_coverage=1 00:03:51.980 --rc genhtml_function_coverage=1 00:03:51.980 --rc genhtml_legend=1 00:03:51.980 --rc geninfo_all_blocks=1 00:03:51.980 --rc geninfo_unexecuted_blocks=1 00:03:51.980 00:03:51.980 ' 00:03:51.980 12:42:18 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:51.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.980 --rc genhtml_branch_coverage=1 00:03:51.980 --rc genhtml_function_coverage=1 00:03:51.980 --rc genhtml_legend=1 00:03:51.980 --rc geninfo_all_blocks=1 00:03:51.980 --rc geninfo_unexecuted_blocks=1 00:03:51.980 00:03:51.980 ' 00:03:51.980 12:42:18 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:51.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.980 --rc genhtml_branch_coverage=1 00:03:51.980 --rc genhtml_function_coverage=1 00:03:51.980 --rc genhtml_legend=1 00:03:51.980 --rc geninfo_all_blocks=1 00:03:51.980 --rc geninfo_unexecuted_blocks=1 00:03:51.980 00:03:51.980 ' 00:03:51.980 12:42:18 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:51.980 12:42:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.980 12:42:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.980 12:42:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.980 ************************************ 00:03:51.980 START TEST env_memory 00:03:51.980 ************************************ 00:03:51.980 12:42:18 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:52.238 00:03:52.238 00:03:52.238 CUnit - A unit testing framework for C - Version 2.1-3 00:03:52.238 http://cunit.sourceforge.net/ 00:03:52.238 00:03:52.238 00:03:52.238 Suite: memory 00:03:52.238 Test: alloc and free memory map ...[2024-11-27 12:42:18.405059] 
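[editor's note] The scripts/common.sh trace above implements "lt 1.15 2" by splitting both versions on IFS=.-: into arrays and comparing component by component. A reduced standalone sketch of the same idea (ver_lt is illustrative, not the SPDK helper itself):

  ver_lt() {                         # returns 0 if $1 < $2, component-wise
    local IFS=.-: i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      ((${a[i]:-0} < ${b[i]:-0})) && return 0
      ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                         # equal is not less-than
  }
  ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the check traced above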
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:52.238 passed 00:03:52.238 Test: mem map translation ...[2024-11-27 12:42:18.423515] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:52.238 [2024-11-27 12:42:18.423530] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:52.238 [2024-11-27 12:42:18.423563] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:52.238 [2024-11-27 12:42:18.423572] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:52.238 passed 00:03:52.238 Test: mem map registration ...[2024-11-27 12:42:18.458457] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:52.238 [2024-11-27 12:42:18.458481] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:52.238 passed 00:03:52.238 Test: mem map adjacent registrations ...passed 00:03:52.238 00:03:52.238 Run Summary: Type Total Ran Passed Failed Inactive 00:03:52.238 suites 1 1 n/a 0 0 00:03:52.238 tests 4 4 4 0 0 00:03:52.238 asserts 152 152 152 0 n/a 00:03:52.238 00:03:52.238 Elapsed time = 0.130 seconds 00:03:52.238 00:03:52.238 real 0m0.144s 00:03:52.238 user 0m0.137s 00:03:52.238 sys 0m0.007s 00:03:52.238 12:42:18 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.238 12:42:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:52.238 ************************************ 00:03:52.238 END TEST env_memory 00:03:52.238 ************************************ 00:03:52.238 12:42:18 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:52.238 12:42:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.239 12:42:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.239 12:42:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.239 ************************************ 00:03:52.239 START TEST env_vtophys 00:03:52.239 ************************************ 00:03:52.239 12:42:18 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:52.239 EAL: lib.eal log level changed from notice to debug 00:03:52.239 EAL: Detected lcore 0 as core 0 on socket 0 00:03:52.239 EAL: Detected lcore 1 as core 1 on socket 0 00:03:52.239 EAL: Detected lcore 2 as core 2 on socket 0 00:03:52.239 EAL: Detected lcore 3 as core 3 on socket 0 00:03:52.239 EAL: Detected lcore 4 as core 4 on socket 0 00:03:52.239 EAL: Detected lcore 5 as core 5 on socket 0 00:03:52.239 EAL: Detected lcore 6 as core 6 on socket 0 00:03:52.239 EAL: Detected lcore 7 as core 8 on socket 0 00:03:52.239 EAL: Detected lcore 8 as core 9 on socket 0 00:03:52.239 EAL: Detected lcore 9 as core 10 on socket 0 00:03:52.239 EAL: Detected lcore 10 as core 11 on socket 0 00:03:52.239 
EAL: Detected lcore 11 as core 12 on socket 0 00:03:52.239 EAL: Detected lcore 12 as core 13 on socket 0 00:03:52.239 EAL: Detected lcore 13 as core 14 on socket 0 00:03:52.239 EAL: Detected lcore 14 as core 16 on socket 0 00:03:52.239 EAL: Detected lcore 15 as core 17 on socket 0 00:03:52.239 EAL: Detected lcore 16 as core 18 on socket 0 00:03:52.239 EAL: Detected lcore 17 as core 19 on socket 0 00:03:52.239 EAL: Detected lcore 18 as core 20 on socket 0 00:03:52.239 EAL: Detected lcore 19 as core 21 on socket 0 00:03:52.239 EAL: Detected lcore 20 as core 22 on socket 0 00:03:52.239 EAL: Detected lcore 21 as core 24 on socket 0 00:03:52.239 EAL: Detected lcore 22 as core 25 on socket 0 00:03:52.239 EAL: Detected lcore 23 as core 26 on socket 0 00:03:52.239 EAL: Detected lcore 24 as core 27 on socket 0 00:03:52.239 EAL: Detected lcore 25 as core 28 on socket 0 00:03:52.239 EAL: Detected lcore 26 as core 29 on socket 0 00:03:52.239 EAL: Detected lcore 27 as core 30 on socket 0 00:03:52.239 EAL: Detected lcore 28 as core 0 on socket 1 00:03:52.239 EAL: Detected lcore 29 as core 1 on socket 1 00:03:52.239 EAL: Detected lcore 30 as core 2 on socket 1 00:03:52.239 EAL: Detected lcore 31 as core 3 on socket 1 00:03:52.239 EAL: Detected lcore 32 as core 4 on socket 1 00:03:52.239 EAL: Detected lcore 33 as core 5 on socket 1 00:03:52.239 EAL: Detected lcore 34 as core 6 on socket 1 00:03:52.239 EAL: Detected lcore 35 as core 8 on socket 1 00:03:52.239 EAL: Detected lcore 36 as core 9 on socket 1 00:03:52.239 EAL: Detected lcore 37 as core 10 on socket 1 00:03:52.239 EAL: Detected lcore 38 as core 11 on socket 1 00:03:52.239 EAL: Detected lcore 39 as core 12 on socket 1 00:03:52.239 EAL: Detected lcore 40 as core 13 on socket 1 00:03:52.239 EAL: Detected lcore 41 as core 14 on socket 1 00:03:52.239 EAL: Detected lcore 42 as core 16 on socket 1 00:03:52.239 EAL: Detected lcore 43 as core 17 on socket 1 00:03:52.239 EAL: Detected lcore 44 as core 18 on socket 1 00:03:52.239 EAL: Detected lcore 45 as core 19 on socket 1 00:03:52.239 EAL: Detected lcore 46 as core 20 on socket 1 00:03:52.239 EAL: Detected lcore 47 as core 21 on socket 1 00:03:52.239 EAL: Detected lcore 48 as core 22 on socket 1 00:03:52.239 EAL: Detected lcore 49 as core 24 on socket 1 00:03:52.239 EAL: Detected lcore 50 as core 25 on socket 1 00:03:52.239 EAL: Detected lcore 51 as core 26 on socket 1 00:03:52.239 EAL: Detected lcore 52 as core 27 on socket 1 00:03:52.239 EAL: Detected lcore 53 as core 28 on socket 1 00:03:52.239 EAL: Detected lcore 54 as core 29 on socket 1 00:03:52.239 EAL: Detected lcore 55 as core 30 on socket 1 00:03:52.239 EAL: Detected lcore 56 as core 0 on socket 0 00:03:52.239 EAL: Detected lcore 57 as core 1 on socket 0 00:03:52.239 EAL: Detected lcore 58 as core 2 on socket 0 00:03:52.239 EAL: Detected lcore 59 as core 3 on socket 0 00:03:52.239 EAL: Detected lcore 60 as core 4 on socket 0 00:03:52.239 EAL: Detected lcore 61 as core 5 on socket 0 00:03:52.239 EAL: Detected lcore 62 as core 6 on socket 0 00:03:52.239 EAL: Detected lcore 63 as core 8 on socket 0 00:03:52.239 EAL: Detected lcore 64 as core 9 on socket 0 00:03:52.239 EAL: Detected lcore 65 as core 10 on socket 0 00:03:52.239 EAL: Detected lcore 66 as core 11 on socket 0 00:03:52.239 EAL: Detected lcore 67 as core 12 on socket 0 00:03:52.239 EAL: Detected lcore 68 as core 13 on socket 0 00:03:52.239 EAL: Detected lcore 69 as core 14 on socket 0 00:03:52.239 EAL: Detected lcore 70 as core 16 on socket 0 00:03:52.239 EAL: Detected lcore 71 as core 
17 on socket 0 00:03:52.239 EAL: Detected lcore 72 as core 18 on socket 0 00:03:52.239 EAL: Detected lcore 73 as core 19 on socket 0 00:03:52.239 EAL: Detected lcore 74 as core 20 on socket 0 00:03:52.239 EAL: Detected lcore 75 as core 21 on socket 0 00:03:52.239 EAL: Detected lcore 76 as core 22 on socket 0 00:03:52.239 EAL: Detected lcore 77 as core 24 on socket 0 00:03:52.239 EAL: Detected lcore 78 as core 25 on socket 0 00:03:52.239 EAL: Detected lcore 79 as core 26 on socket 0 00:03:52.239 EAL: Detected lcore 80 as core 27 on socket 0 00:03:52.239 EAL: Detected lcore 81 as core 28 on socket 0 00:03:52.239 EAL: Detected lcore 82 as core 29 on socket 0 00:03:52.239 EAL: Detected lcore 83 as core 30 on socket 0 00:03:52.239 EAL: Detected lcore 84 as core 0 on socket 1 00:03:52.239 EAL: Detected lcore 85 as core 1 on socket 1 00:03:52.239 EAL: Detected lcore 86 as core 2 on socket 1 00:03:52.239 EAL: Detected lcore 87 as core 3 on socket 1 00:03:52.239 EAL: Detected lcore 88 as core 4 on socket 1 00:03:52.239 EAL: Detected lcore 89 as core 5 on socket 1 00:03:52.239 EAL: Detected lcore 90 as core 6 on socket 1 00:03:52.239 EAL: Detected lcore 91 as core 8 on socket 1 00:03:52.239 EAL: Detected lcore 92 as core 9 on socket 1 00:03:52.239 EAL: Detected lcore 93 as core 10 on socket 1 00:03:52.239 EAL: Detected lcore 94 as core 11 on socket 1 00:03:52.239 EAL: Detected lcore 95 as core 12 on socket 1 00:03:52.239 EAL: Detected lcore 96 as core 13 on socket 1 00:03:52.239 EAL: Detected lcore 97 as core 14 on socket 1 00:03:52.239 EAL: Detected lcore 98 as core 16 on socket 1 00:03:52.239 EAL: Detected lcore 99 as core 17 on socket 1 00:03:52.239 EAL: Detected lcore 100 as core 18 on socket 1 00:03:52.239 EAL: Detected lcore 101 as core 19 on socket 1 00:03:52.239 EAL: Detected lcore 102 as core 20 on socket 1 00:03:52.239 EAL: Detected lcore 103 as core 21 on socket 1 00:03:52.239 EAL: Detected lcore 104 as core 22 on socket 1 00:03:52.239 EAL: Detected lcore 105 as core 24 on socket 1 00:03:52.239 EAL: Detected lcore 106 as core 25 on socket 1 00:03:52.239 EAL: Detected lcore 107 as core 26 on socket 1 00:03:52.239 EAL: Detected lcore 108 as core 27 on socket 1 00:03:52.239 EAL: Detected lcore 109 as core 28 on socket 1 00:03:52.239 EAL: Detected lcore 110 as core 29 on socket 1 00:03:52.239 EAL: Detected lcore 111 as core 30 on socket 1 00:03:52.239 EAL: Maximum logical cores by configuration: 128 00:03:52.239 EAL: Detected CPU lcores: 112 00:03:52.239 EAL: Detected NUMA nodes: 2 00:03:52.239 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:52.239 EAL: Detected shared linkage of DPDK 00:03:52.498 EAL: No shared files mode enabled, IPC will be disabled 00:03:52.498 EAL: Bus pci wants IOVA as 'DC' 00:03:52.498 EAL: Buses did not request a specific IOVA mode. 00:03:52.498 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:52.498 EAL: Selected IOVA mode 'VA' 00:03:52.498 EAL: Probing VFIO support... 00:03:52.498 EAL: IOMMU type 1 (Type 1) is supported 00:03:52.498 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:52.498 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:52.498 EAL: VFIO support initialized 00:03:52.498 EAL: Ask a virtual area of 0x2e000 bytes 00:03:52.498 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:52.498 EAL: Setting up physically contiguous memory... 
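[editor's note] EAL selects IOVA-as-VA above because an IOMMU is present and the NVMe device is bound to vfio-pci. Whether that path is available can be checked from sysfs before a run; these are standard kernel interfaces, not SPDK-specific, and the BDF matches the device under test here:

  # Non-empty iommu_groups means an IOMMU is active
  ls /sys/kernel/iommu_groups/ | head

  # Which driver owns the NVMe device under test?
  readlink /sys/bus/pci/devices/0000:d8:00.0/driver    # -> .../vfio-pci at this point in the log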
00:03:52.498 EAL: Setting maximum number of open files to 524288 00:03:52.498 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:52.498 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:52.498 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:52.498 EAL: Ask a virtual area of 0x61000 bytes 00:03:52.498 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:52.498 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:52.498 EAL: Ask a virtual area of 0x400000000 bytes 00:03:52.498 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:52.498 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:52.498 EAL: Ask a virtual area of 0x61000 bytes 00:03:52.498 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:52.498 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:52.498 EAL: Ask a virtual area of 0x400000000 bytes 00:03:52.498 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:52.498 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:52.498 EAL: Ask a virtual area of 0x61000 bytes 00:03:52.498 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:52.498 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:52.498 EAL: Ask a virtual area of 0x400000000 bytes 00:03:52.498 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:52.498 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:52.498 EAL: Ask a virtual area of 0x61000 bytes 00:03:52.498 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:52.498 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:52.498 EAL: Ask a virtual area of 0x400000000 bytes 00:03:52.498 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:52.498 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:52.498 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:52.498 EAL: Ask a virtual area of 0x61000 bytes 00:03:52.498 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:52.498 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:52.498 EAL: Ask a virtual area of 0x400000000 bytes 00:03:52.498 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:52.498 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:52.498 EAL: Ask a virtual area of 0x61000 bytes 00:03:52.498 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:52.498 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:52.498 EAL: Ask a virtual area of 0x400000000 bytes 00:03:52.498 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:52.498 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:52.498 EAL: Ask a virtual area of 0x61000 bytes 00:03:52.498 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:52.498 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:52.498 EAL: Ask a virtual area of 0x400000000 bytes 00:03:52.498 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:52.498 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:52.498 EAL: Ask a virtual area of 0x61000 bytes 00:03:52.498 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:52.498 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:52.498 EAL: Ask a virtual area of 0x400000000 bytes 00:03:52.498 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:52.498 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:52.498 EAL: Hugepages will be freed exactly as allocated. 00:03:52.498 EAL: No shared files mode enabled, IPC is disabled 00:03:52.498 EAL: No shared files mode enabled, IPC is disabled 00:03:52.498 EAL: TSC frequency is ~2500000 KHz 00:03:52.498 EAL: Main lcore 0 is ready (tid=7fcfb4bc0a00;cpuset=[0]) 00:03:52.498 EAL: Trying to obtain current memory policy. 00:03:52.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.498 EAL: Restoring previous memory policy: 0 00:03:52.498 EAL: request: mp_malloc_sync 00:03:52.498 EAL: No shared files mode enabled, IPC is disabled 00:03:52.498 EAL: Heap on socket 0 was expanded by 2MB 00:03:52.498 EAL: No shared files mode enabled, IPC is disabled 00:03:52.498 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:52.498 EAL: Mem event callback 'spdk:(nil)' registered 00:03:52.498 00:03:52.498 00:03:52.498 CUnit - A unit testing framework for C - Version 2.1-3 00:03:52.498 http://cunit.sourceforge.net/ 00:03:52.498 00:03:52.498 00:03:52.498 Suite: components_suite 00:03:52.498 Test: vtophys_malloc_test ...passed 00:03:52.498 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:52.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.498 EAL: Restoring previous memory policy: 4 00:03:52.498 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.498 EAL: request: mp_malloc_sync 00:03:52.498 EAL: No shared files mode enabled, IPC is disabled 00:03:52.498 EAL: Heap on socket 0 was expanded by 4MB 00:03:52.498 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.498 EAL: request: mp_malloc_sync 00:03:52.498 EAL: No shared files mode enabled, IPC is disabled 00:03:52.498 EAL: Heap on socket 0 was shrunk by 4MB 00:03:52.498 EAL: Trying to obtain current memory policy. 00:03:52.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.498 EAL: Restoring previous memory policy: 4 00:03:52.498 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.498 EAL: request: mp_malloc_sync 00:03:52.498 EAL: No shared files mode enabled, IPC is disabled 00:03:52.498 EAL: Heap on socket 0 was expanded by 6MB 00:03:52.498 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.498 EAL: request: mp_malloc_sync 00:03:52.498 EAL: No shared files mode enabled, IPC is disabled 00:03:52.498 EAL: Heap on socket 0 was shrunk by 6MB 00:03:52.498 EAL: Trying to obtain current memory policy. 00:03:52.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.498 EAL: Restoring previous memory policy: 4 00:03:52.498 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.498 EAL: request: mp_malloc_sync 00:03:52.498 EAL: No shared files mode enabled, IPC is disabled 00:03:52.498 EAL: Heap on socket 0 was expanded by 10MB 00:03:52.498 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.498 EAL: request: mp_malloc_sync 00:03:52.498 EAL: No shared files mode enabled, IPC is disabled 00:03:52.499 EAL: Heap on socket 0 was shrunk by 10MB 00:03:52.499 EAL: Trying to obtain current memory policy. 
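[editor's note] The virtual-area reservations above are internally consistent: each memseg list holds n_segs:8192 pages of hugepage_sz:2097152 (2 MiB), so one list reserves 8192 x 2 MiB = 16 GiB = 0x400000000 bytes, exactly the "size = 0x400000000" shown, and 4 lists per NUMA node across 2 nodes reserves 128 GiB of VA in total. A quick arithmetic check:

  printf '0x%x\n' $((8192 * 2097152))          # 0x400000000, 16 GiB per memseg list
  echo $((4 * 2 * 8192 * 2097152 / 2**30))     # 128 (GiB of VA across both sockets)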
00:03:52.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.499 EAL: Restoring previous memory policy: 4 00:03:52.499 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.499 EAL: request: mp_malloc_sync 00:03:52.499 EAL: No shared files mode enabled, IPC is disabled 00:03:52.499 EAL: Heap on socket 0 was expanded by 18MB 00:03:52.499 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.499 EAL: request: mp_malloc_sync 00:03:52.499 EAL: No shared files mode enabled, IPC is disabled 00:03:52.499 EAL: Heap on socket 0 was shrunk by 18MB 00:03:52.499 EAL: Trying to obtain current memory policy. 00:03:52.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.499 EAL: Restoring previous memory policy: 4 00:03:52.499 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.499 EAL: request: mp_malloc_sync 00:03:52.499 EAL: No shared files mode enabled, IPC is disabled 00:03:52.499 EAL: Heap on socket 0 was expanded by 34MB 00:03:52.499 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.499 EAL: request: mp_malloc_sync 00:03:52.499 EAL: No shared files mode enabled, IPC is disabled 00:03:52.499 EAL: Heap on socket 0 was shrunk by 34MB 00:03:52.499 EAL: Trying to obtain current memory policy. 00:03:52.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.499 EAL: Restoring previous memory policy: 4 00:03:52.499 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.499 EAL: request: mp_malloc_sync 00:03:52.499 EAL: No shared files mode enabled, IPC is disabled 00:03:52.499 EAL: Heap on socket 0 was expanded by 66MB 00:03:52.499 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.499 EAL: request: mp_malloc_sync 00:03:52.499 EAL: No shared files mode enabled, IPC is disabled 00:03:52.499 EAL: Heap on socket 0 was shrunk by 66MB 00:03:52.499 EAL: Trying to obtain current memory policy. 00:03:52.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.499 EAL: Restoring previous memory policy: 4 00:03:52.499 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.499 EAL: request: mp_malloc_sync 00:03:52.499 EAL: No shared files mode enabled, IPC is disabled 00:03:52.499 EAL: Heap on socket 0 was expanded by 130MB 00:03:52.499 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.499 EAL: request: mp_malloc_sync 00:03:52.499 EAL: No shared files mode enabled, IPC is disabled 00:03:52.499 EAL: Heap on socket 0 was shrunk by 130MB 00:03:52.499 EAL: Trying to obtain current memory policy. 00:03:52.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.499 EAL: Restoring previous memory policy: 4 00:03:52.499 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.499 EAL: request: mp_malloc_sync 00:03:52.499 EAL: No shared files mode enabled, IPC is disabled 00:03:52.499 EAL: Heap on socket 0 was expanded by 258MB 00:03:52.757 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.757 EAL: request: mp_malloc_sync 00:03:52.757 EAL: No shared files mode enabled, IPC is disabled 00:03:52.757 EAL: Heap on socket 0 was shrunk by 258MB 00:03:52.757 EAL: Trying to obtain current memory policy. 
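[editor's note] The vtophys_spdk_malloc_test expansions above and below appear to follow the series 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB, i.e. 2^k + 2 for k = 1..10, roughly doubling the heap each step so the mem-event callbacks fire on both expand and shrink. A one-liner reproducing the series:

  for k in $(seq 1 10); do printf '%dMB ' $((2**k + 2)); done; echo
  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB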
00:03:52.757 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.757 EAL: Restoring previous memory policy: 4 00:03:52.757 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.757 EAL: request: mp_malloc_sync 00:03:52.757 EAL: No shared files mode enabled, IPC is disabled 00:03:52.757 EAL: Heap on socket 0 was expanded by 514MB 00:03:52.757 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.017 EAL: request: mp_malloc_sync 00:03:53.017 EAL: No shared files mode enabled, IPC is disabled 00:03:53.017 EAL: Heap on socket 0 was shrunk by 514MB 00:03:53.017 EAL: Trying to obtain current memory policy. 00:03:53.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.017 EAL: Restoring previous memory policy: 4 00:03:53.017 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.017 EAL: request: mp_malloc_sync 00:03:53.017 EAL: No shared files mode enabled, IPC is disabled 00:03:53.017 EAL: Heap on socket 0 was expanded by 1026MB 00:03:53.274 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.532 EAL: request: mp_malloc_sync 00:03:53.532 EAL: No shared files mode enabled, IPC is disabled 00:03:53.532 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:53.532 passed 00:03:53.532 00:03:53.532 Run Summary: Type Total Ran Passed Failed Inactive 00:03:53.532 suites 1 1 n/a 0 0 00:03:53.532 tests 2 2 2 0 0 00:03:53.532 asserts 497 497 497 0 n/a 00:03:53.532 00:03:53.532 Elapsed time = 0.961 seconds 00:03:53.532 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.532 EAL: request: mp_malloc_sync 00:03:53.532 EAL: No shared files mode enabled, IPC is disabled 00:03:53.532 EAL: Heap on socket 0 was shrunk by 2MB 00:03:53.532 EAL: No shared files mode enabled, IPC is disabled 00:03:53.532 EAL: No shared files mode enabled, IPC is disabled 00:03:53.532 EAL: No shared files mode enabled, IPC is disabled 00:03:53.532 00:03:53.532 real 0m1.106s 00:03:53.532 user 0m0.637s 00:03:53.532 sys 0m0.443s 00:03:53.532 12:42:19 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.532 12:42:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:53.532 ************************************ 00:03:53.532 END TEST env_vtophys 00:03:53.532 ************************************ 00:03:53.532 12:42:19 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:53.532 12:42:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.532 12:42:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.532 12:42:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.532 ************************************ 00:03:53.532 START TEST env_pci 00:03:53.532 ************************************ 00:03:53.532 12:42:19 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:53.532 00:03:53.532 00:03:53.532 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.532 http://cunit.sourceforge.net/ 00:03:53.532 00:03:53.532 00:03:53.532 Suite: pci 00:03:53.532 Test: pci_hook ...[2024-11-27 12:42:19.782456] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3959553 has claimed it 00:03:53.532 EAL: Cannot find device (10000:00:01.0) 00:03:53.532 EAL: Failed to attach device on primary process 00:03:53.532 passed 00:03:53.532 00:03:53.532 Run Summary: Type Total Ran Passed Failed Inactive 00:03:53.532 suites 1 
1 n/a 0 0 00:03:53.532 tests 1 1 1 0 0 00:03:53.532 asserts 25 25 25 0 n/a 00:03:53.532 00:03:53.532 Elapsed time = 0.033 seconds 00:03:53.532 00:03:53.532 real 0m0.044s 00:03:53.532 user 0m0.015s 00:03:53.532 sys 0m0.029s 00:03:53.532 12:42:19 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.532 12:42:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:53.532 ************************************ 00:03:53.532 END TEST env_pci 00:03:53.532 ************************************ 00:03:53.532 12:42:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:53.532 12:42:19 env -- env/env.sh@15 -- # uname 00:03:53.532 12:42:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:53.532 12:42:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:53.532 12:42:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:53.532 12:42:19 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:53.532 12:42:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.532 12:42:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.532 ************************************ 00:03:53.532 START TEST env_dpdk_post_init 00:03:53.532 ************************************ 00:03:53.532 12:42:19 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:53.791 EAL: Detected CPU lcores: 112 00:03:53.791 EAL: Detected NUMA nodes: 2 00:03:53.791 EAL: Detected shared linkage of DPDK 00:03:53.791 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:53.791 EAL: Selected IOVA mode 'VA' 00:03:53.791 EAL: VFIO support initialized 00:03:53.791 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:53.791 EAL: Using IOMMU type 1 (Type 1) 00:03:53.791 EAL: Ignore mapping IO port bar(1) 00:03:53.791 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:53.791 EAL: Ignore mapping IO port bar(1) 00:03:53.791 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:53.791 EAL: Ignore mapping IO port bar(1) 00:03:53.791 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:53.791 EAL: Ignore mapping IO port bar(1) 00:03:53.791 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:53.791 EAL: Ignore mapping IO port bar(1) 00:03:53.791 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:53.791 EAL: Ignore mapping IO port bar(1) 00:03:53.791 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:53.791 EAL: Ignore mapping IO port bar(1) 00:03:53.791 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:53.791 EAL: Ignore mapping IO port bar(1) 00:03:53.791 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:53.791 EAL: Ignore mapping IO port bar(1) 00:03:53.791 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:54.049 EAL: Ignore mapping IO port bar(1) 00:03:54.049 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:54.049 EAL: Ignore mapping IO port bar(1) 00:03:54.049 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:54.049 EAL: Ignore mapping IO port 
bar(1) 00:03:54.049 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:54.049 EAL: Ignore mapping IO port bar(1) 00:03:54.049 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:54.049 EAL: Ignore mapping IO port bar(1) 00:03:54.049 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:54.049 EAL: Ignore mapping IO port bar(1) 00:03:54.049 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:54.049 EAL: Ignore mapping IO port bar(1) 00:03:54.049 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:54.618 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:03:58.800 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:03:58.800 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:03:59.058 Starting DPDK initialization... 00:03:59.058 Starting SPDK post initialization... 00:03:59.058 SPDK NVMe probe 00:03:59.058 Attaching to 0000:d8:00.0 00:03:59.058 Attached to 0000:d8:00.0 00:03:59.058 Cleaning up... 00:03:59.058 00:03:59.058 real 0m5.361s 00:03:59.058 user 0m3.769s 00:03:59.058 sys 0m0.652s 00:03:59.058 12:42:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.058 12:42:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:59.058 ************************************ 00:03:59.058 END TEST env_dpdk_post_init 00:03:59.058 ************************************ 00:03:59.058 12:42:25 env -- env/env.sh@26 -- # uname 00:03:59.058 12:42:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:59.058 12:42:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.058 12:42:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.058 12:42:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.058 12:42:25 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.058 ************************************ 00:03:59.058 START TEST env_mem_callbacks 00:03:59.058 ************************************ 00:03:59.058 12:42:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.058 EAL: Detected CPU lcores: 112 00:03:59.058 EAL: Detected NUMA nodes: 2 00:03:59.058 EAL: Detected shared linkage of DPDK 00:03:59.058 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.058 EAL: Selected IOVA mode 'VA' 00:03:59.058 EAL: VFIO support initialized 00:03:59.058 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.058 00:03:59.058 00:03:59.058 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.058 http://cunit.sourceforge.net/ 00:03:59.058 00:03:59.058 00:03:59.058 Suite: memory 00:03:59.058 Test: test ... 
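The register/unregister lines that follow are the mem_callbacks harness exercising SPDK's memory-event hooks: each malloc that grows the DPDK heap fires a 'register' callback, and each free that shrinks it fires an 'unregister'. As a minimal sketch — assuming the same built tree shown in the paths above and hugepages already configured (e.g. via scripts/setup.sh) — the harness can be rerun standalone:

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # tree path taken from this log
sudo "$SPDK_DIR/test/env/mem_callbacks/mem_callbacks"    # prints register/buf/free/unregister pairs like those below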
00:03:59.058 register 0x200000200000 2097152 00:03:59.058 malloc 3145728 00:03:59.058 register 0x200000400000 4194304 00:03:59.058 buf 0x200000500000 len 3145728 PASSED 00:03:59.058 malloc 64 00:03:59.058 buf 0x2000004fff40 len 64 PASSED 00:03:59.058 malloc 4194304 00:03:59.058 register 0x200000800000 6291456 00:03:59.058 buf 0x200000a00000 len 4194304 PASSED 00:03:59.058 free 0x200000500000 3145728 00:03:59.058 free 0x2000004fff40 64 00:03:59.058 unregister 0x200000400000 4194304 PASSED 00:03:59.058 free 0x200000a00000 4194304 00:03:59.058 unregister 0x200000800000 6291456 PASSED 00:03:59.058 malloc 8388608 00:03:59.058 register 0x200000400000 10485760 00:03:59.058 buf 0x200000600000 len 8388608 PASSED 00:03:59.058 free 0x200000600000 8388608 00:03:59.058 unregister 0x200000400000 10485760 PASSED 00:03:59.058 passed 00:03:59.058 00:03:59.058 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.058 suites 1 1 n/a 0 0 00:03:59.058 tests 1 1 1 0 0 00:03:59.058 asserts 15 15 15 0 n/a 00:03:59.058 00:03:59.058 Elapsed time = 0.005 seconds 00:03:59.058 00:03:59.058 real 0m0.072s 00:03:59.058 user 0m0.023s 00:03:59.058 sys 0m0.049s 00:03:59.058 12:42:25 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.058 12:42:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:59.058 ************************************ 00:03:59.058 END TEST env_mem_callbacks 00:03:59.058 ************************************ 00:03:59.316 00:03:59.316 real 0m7.304s 00:03:59.316 user 0m4.820s 00:03:59.316 sys 0m1.560s 00:03:59.316 12:42:25 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.316 12:42:25 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.316 ************************************ 00:03:59.316 END TEST env 00:03:59.316 ************************************ 00:03:59.316 12:42:25 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:59.316 12:42:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.316 12:42:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.316 12:42:25 -- common/autotest_common.sh@10 -- # set +x 00:03:59.316 ************************************ 00:03:59.316 START TEST rpc 00:03:59.316 ************************************ 00:03:59.316 12:42:25 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:59.316 * Looking for test storage... 
00:03:59.316 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:59.316 12:42:25 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:59.316 12:42:25 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:59.316 12:42:25 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:59.574 12:42:25 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:59.574 12:42:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.574 12:42:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.574 12:42:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.574 12:42:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.574 12:42:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.574 12:42:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.574 12:42:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.574 12:42:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.574 12:42:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.574 12:42:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.574 12:42:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.574 12:42:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:59.574 12:42:25 rpc -- scripts/common.sh@345 -- # : 1 00:03:59.574 12:42:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.574 12:42:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:59.574 12:42:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:59.574 12:42:25 rpc -- scripts/common.sh@353 -- # local d=1 00:03:59.574 12:42:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.574 12:42:25 rpc -- scripts/common.sh@355 -- # echo 1 00:03:59.574 12:42:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.574 12:42:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:59.574 12:42:25 rpc -- scripts/common.sh@353 -- # local d=2 00:03:59.574 12:42:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.574 12:42:25 rpc -- scripts/common.sh@355 -- # echo 2 00:03:59.574 12:42:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.574 12:42:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.574 12:42:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.574 12:42:25 rpc -- scripts/common.sh@368 -- # return 0 00:03:59.574 12:42:25 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.574 12:42:25 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:59.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.574 --rc genhtml_branch_coverage=1 00:03:59.574 --rc genhtml_function_coverage=1 00:03:59.574 --rc genhtml_legend=1 00:03:59.574 --rc geninfo_all_blocks=1 00:03:59.574 --rc geninfo_unexecuted_blocks=1 00:03:59.574 00:03:59.574 ' 00:03:59.574 12:42:25 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:59.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.574 --rc genhtml_branch_coverage=1 00:03:59.575 --rc genhtml_function_coverage=1 00:03:59.575 --rc genhtml_legend=1 00:03:59.575 --rc geninfo_all_blocks=1 00:03:59.575 --rc geninfo_unexecuted_blocks=1 00:03:59.575 00:03:59.575 ' 00:03:59.575 12:42:25 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:59.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.575 --rc genhtml_branch_coverage=1 00:03:59.575 --rc genhtml_function_coverage=1 00:03:59.575 
--rc genhtml_legend=1 00:03:59.575 --rc geninfo_all_blocks=1 00:03:59.575 --rc geninfo_unexecuted_blocks=1 00:03:59.575 00:03:59.575 ' 00:03:59.575 12:42:25 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:59.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.575 --rc genhtml_branch_coverage=1 00:03:59.575 --rc genhtml_function_coverage=1 00:03:59.575 --rc genhtml_legend=1 00:03:59.575 --rc geninfo_all_blocks=1 00:03:59.575 --rc geninfo_unexecuted_blocks=1 00:03:59.575 00:03:59.575 ' 00:03:59.575 12:42:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3960761 00:03:59.575 12:42:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.575 12:42:25 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:59.575 12:42:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3960761 00:03:59.575 12:42:25 rpc -- common/autotest_common.sh@835 -- # '[' -z 3960761 ']' 00:03:59.575 12:42:25 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.575 12:42:25 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.575 12:42:25 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.575 12:42:25 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.575 12:42:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.575 [2024-11-27 12:42:25.793601] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:03:59.575 [2024-11-27 12:42:25.793653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3960761 ] 00:03:59.575 [2024-11-27 12:42:25.884066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.575 [2024-11-27 12:42:25.924480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:59.575 [2024-11-27 12:42:25.924520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3960761' to capture a snapshot of events at runtime. 00:03:59.575 [2024-11-27 12:42:25.924530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:59.575 [2024-11-27 12:42:25.924538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:59.575 [2024-11-27 12:42:25.924545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3960761 for offline analysis/debug. 
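The app_setup_trace notices above double as a how-to for pulling the bdev tracepoints enabled by spdk_tgt's '-e bdev' flag. A minimal sketch following those instructions — assuming spdk_trace was built into the same build/bin directory as the spdk_tgt binary used here:

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
sudo "$SPDK_DIR/build/bin/spdk_trace" -s spdk_tgt -p 3960761   # snapshot events while pid 3960761 is still running
cp /dev/shm/spdk_tgt_trace.pid3960761 /tmp/                    # or keep the shm file for offline analysis after the target exits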
00:03:59.575 [2024-11-27 12:42:25.925105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.507 12:42:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.507 12:42:26 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:00.507 12:42:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:00.507 12:42:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:00.507 12:42:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:00.507 12:42:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:00.507 12:42:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.507 12:42:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.507 12:42:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.507 ************************************ 00:04:00.507 START TEST rpc_integrity 00:04:00.507 ************************************ 00:04:00.507 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:00.507 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.507 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.507 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.507 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.507 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:00.507 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:00.508 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:00.508 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:00.508 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.508 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.508 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.508 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:00.508 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:00.508 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.508 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.508 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.508 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:00.508 { 00:04:00.508 "name": "Malloc0", 00:04:00.508 "aliases": [ 00:04:00.508 "ba1d2000-e084-4ba2-a001-804219d46d46" 00:04:00.508 ], 00:04:00.508 "product_name": "Malloc disk", 00:04:00.508 "block_size": 512, 00:04:00.508 "num_blocks": 16384, 00:04:00.508 "uuid": "ba1d2000-e084-4ba2-a001-804219d46d46", 00:04:00.508 "assigned_rate_limits": { 00:04:00.508 "rw_ios_per_sec": 0, 00:04:00.508 "rw_mbytes_per_sec": 0, 00:04:00.508 "r_mbytes_per_sec": 0, 00:04:00.508 "w_mbytes_per_sec": 0 00:04:00.508 }, 00:04:00.508 "claimed": false, 
00:04:00.508 "zoned": false, 00:04:00.508 "supported_io_types": { 00:04:00.508 "read": true, 00:04:00.508 "write": true, 00:04:00.508 "unmap": true, 00:04:00.508 "flush": true, 00:04:00.508 "reset": true, 00:04:00.508 "nvme_admin": false, 00:04:00.508 "nvme_io": false, 00:04:00.508 "nvme_io_md": false, 00:04:00.508 "write_zeroes": true, 00:04:00.508 "zcopy": true, 00:04:00.508 "get_zone_info": false, 00:04:00.508 "zone_management": false, 00:04:00.508 "zone_append": false, 00:04:00.508 "compare": false, 00:04:00.508 "compare_and_write": false, 00:04:00.508 "abort": true, 00:04:00.508 "seek_hole": false, 00:04:00.508 "seek_data": false, 00:04:00.508 "copy": true, 00:04:00.508 "nvme_iov_md": false 00:04:00.508 }, 00:04:00.508 "memory_domains": [ 00:04:00.508 { 00:04:00.508 "dma_device_id": "system", 00:04:00.508 "dma_device_type": 1 00:04:00.508 }, 00:04:00.508 { 00:04:00.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.508 "dma_device_type": 2 00:04:00.508 } 00:04:00.508 ], 00:04:00.508 "driver_specific": {} 00:04:00.508 } 00:04:00.508 ]' 00:04:00.508 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:00.508 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:00.508 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:00.508 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.508 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.508 [2024-11-27 12:42:26.809886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:00.508 [2024-11-27 12:42:26.809915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:00.508 [2024-11-27 12:42:26.809928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x158f1c0 00:04:00.508 [2024-11-27 12:42:26.809936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:00.508 [2024-11-27 12:42:26.811031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:00.508 [2024-11-27 12:42:26.811054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:00.508 Passthru0 00:04:00.508 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.508 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:00.508 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.508 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.508 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.508 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:00.508 { 00:04:00.508 "name": "Malloc0", 00:04:00.508 "aliases": [ 00:04:00.508 "ba1d2000-e084-4ba2-a001-804219d46d46" 00:04:00.508 ], 00:04:00.508 "product_name": "Malloc disk", 00:04:00.508 "block_size": 512, 00:04:00.508 "num_blocks": 16384, 00:04:00.508 "uuid": "ba1d2000-e084-4ba2-a001-804219d46d46", 00:04:00.508 "assigned_rate_limits": { 00:04:00.508 "rw_ios_per_sec": 0, 00:04:00.508 "rw_mbytes_per_sec": 0, 00:04:00.508 "r_mbytes_per_sec": 0, 00:04:00.508 "w_mbytes_per_sec": 0 00:04:00.508 }, 00:04:00.508 "claimed": true, 00:04:00.508 "claim_type": "exclusive_write", 00:04:00.508 "zoned": false, 00:04:00.508 "supported_io_types": { 00:04:00.508 "read": true, 00:04:00.508 "write": true, 00:04:00.508 "unmap": true, 00:04:00.508 "flush": true, 00:04:00.508 "reset": true, 
00:04:00.508 "nvme_admin": false, 00:04:00.508 "nvme_io": false, 00:04:00.508 "nvme_io_md": false, 00:04:00.508 "write_zeroes": true, 00:04:00.508 "zcopy": true, 00:04:00.508 "get_zone_info": false, 00:04:00.508 "zone_management": false, 00:04:00.508 "zone_append": false, 00:04:00.508 "compare": false, 00:04:00.508 "compare_and_write": false, 00:04:00.508 "abort": true, 00:04:00.508 "seek_hole": false, 00:04:00.508 "seek_data": false, 00:04:00.508 "copy": true, 00:04:00.508 "nvme_iov_md": false 00:04:00.508 }, 00:04:00.508 "memory_domains": [ 00:04:00.508 { 00:04:00.508 "dma_device_id": "system", 00:04:00.508 "dma_device_type": 1 00:04:00.508 }, 00:04:00.508 { 00:04:00.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.508 "dma_device_type": 2 00:04:00.508 } 00:04:00.508 ], 00:04:00.508 "driver_specific": {} 00:04:00.508 }, 00:04:00.508 { 00:04:00.508 "name": "Passthru0", 00:04:00.508 "aliases": [ 00:04:00.508 "be512b34-a073-5727-a8e9-343b113cff66" 00:04:00.508 ], 00:04:00.508 "product_name": "passthru", 00:04:00.508 "block_size": 512, 00:04:00.508 "num_blocks": 16384, 00:04:00.508 "uuid": "be512b34-a073-5727-a8e9-343b113cff66", 00:04:00.508 "assigned_rate_limits": { 00:04:00.508 "rw_ios_per_sec": 0, 00:04:00.508 "rw_mbytes_per_sec": 0, 00:04:00.508 "r_mbytes_per_sec": 0, 00:04:00.508 "w_mbytes_per_sec": 0 00:04:00.508 }, 00:04:00.508 "claimed": false, 00:04:00.508 "zoned": false, 00:04:00.508 "supported_io_types": { 00:04:00.508 "read": true, 00:04:00.508 "write": true, 00:04:00.508 "unmap": true, 00:04:00.508 "flush": true, 00:04:00.508 "reset": true, 00:04:00.508 "nvme_admin": false, 00:04:00.508 "nvme_io": false, 00:04:00.508 "nvme_io_md": false, 00:04:00.508 "write_zeroes": true, 00:04:00.508 "zcopy": true, 00:04:00.508 "get_zone_info": false, 00:04:00.508 "zone_management": false, 00:04:00.508 "zone_append": false, 00:04:00.508 "compare": false, 00:04:00.508 "compare_and_write": false, 00:04:00.508 "abort": true, 00:04:00.508 "seek_hole": false, 00:04:00.508 "seek_data": false, 00:04:00.508 "copy": true, 00:04:00.508 "nvme_iov_md": false 00:04:00.508 }, 00:04:00.508 "memory_domains": [ 00:04:00.508 { 00:04:00.508 "dma_device_id": "system", 00:04:00.508 "dma_device_type": 1 00:04:00.508 }, 00:04:00.508 { 00:04:00.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.508 "dma_device_type": 2 00:04:00.508 } 00:04:00.508 ], 00:04:00.508 "driver_specific": { 00:04:00.508 "passthru": { 00:04:00.508 "name": "Passthru0", 00:04:00.508 "base_bdev_name": "Malloc0" 00:04:00.508 } 00:04:00.508 } 00:04:00.508 } 00:04:00.508 ]' 00:04:00.508 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:00.766 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:00.766 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:00.766 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.766 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.766 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.766 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:00.766 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.766 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.766 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.766 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:00.766 
12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.766 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.767 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.767 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:00.767 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:00.767 12:42:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:00.767 00:04:00.767 real 0m0.288s 00:04:00.767 user 0m0.171s 00:04:00.767 sys 0m0.052s 00:04:00.767 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.767 12:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.767 ************************************ 00:04:00.767 END TEST rpc_integrity 00:04:00.767 ************************************ 00:04:00.767 12:42:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:00.767 12:42:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.767 12:42:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.767 12:42:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.767 ************************************ 00:04:00.767 START TEST rpc_plugins 00:04:00.767 ************************************ 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:00.767 12:42:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.767 12:42:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:00.767 12:42:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.767 12:42:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:00.767 { 00:04:00.767 "name": "Malloc1", 00:04:00.767 "aliases": [ 00:04:00.767 "06fafa52-9610-434c-aaa6-75d0e2bbca59" 00:04:00.767 ], 00:04:00.767 "product_name": "Malloc disk", 00:04:00.767 "block_size": 4096, 00:04:00.767 "num_blocks": 256, 00:04:00.767 "uuid": "06fafa52-9610-434c-aaa6-75d0e2bbca59", 00:04:00.767 "assigned_rate_limits": { 00:04:00.767 "rw_ios_per_sec": 0, 00:04:00.767 "rw_mbytes_per_sec": 0, 00:04:00.767 "r_mbytes_per_sec": 0, 00:04:00.767 "w_mbytes_per_sec": 0 00:04:00.767 }, 00:04:00.767 "claimed": false, 00:04:00.767 "zoned": false, 00:04:00.767 "supported_io_types": { 00:04:00.767 "read": true, 00:04:00.767 "write": true, 00:04:00.767 "unmap": true, 00:04:00.767 "flush": true, 00:04:00.767 "reset": true, 00:04:00.767 "nvme_admin": false, 00:04:00.767 "nvme_io": false, 00:04:00.767 "nvme_io_md": false, 00:04:00.767 "write_zeroes": true, 00:04:00.767 "zcopy": true, 00:04:00.767 "get_zone_info": false, 00:04:00.767 "zone_management": false, 00:04:00.767 "zone_append": false, 00:04:00.767 "compare": false, 00:04:00.767 "compare_and_write": false, 00:04:00.767 "abort": true, 00:04:00.767 "seek_hole": false, 00:04:00.767 "seek_data": false, 00:04:00.767 "copy": true, 00:04:00.767 "nvme_iov_md": false 00:04:00.767 }, 00:04:00.767 
"memory_domains": [ 00:04:00.767 { 00:04:00.767 "dma_device_id": "system", 00:04:00.767 "dma_device_type": 1 00:04:00.767 }, 00:04:00.767 { 00:04:00.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.767 "dma_device_type": 2 00:04:00.767 } 00:04:00.767 ], 00:04:00.767 "driver_specific": {} 00:04:00.767 } 00:04:00.767 ]' 00:04:00.767 12:42:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:00.767 12:42:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:00.767 12:42:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.767 12:42:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.767 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.767 12:42:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:00.767 12:42:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:01.025 12:42:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:01.025 00:04:01.025 real 0m0.139s 00:04:01.025 user 0m0.087s 00:04:01.025 sys 0m0.018s 00:04:01.025 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.025 12:42:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.025 ************************************ 00:04:01.025 END TEST rpc_plugins 00:04:01.025 ************************************ 00:04:01.025 12:42:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:01.025 12:42:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.025 12:42:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.025 12:42:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.025 ************************************ 00:04:01.025 START TEST rpc_trace_cmd_test 00:04:01.025 ************************************ 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:01.025 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3960761", 00:04:01.025 "tpoint_group_mask": "0x8", 00:04:01.025 "iscsi_conn": { 00:04:01.025 "mask": "0x2", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "scsi": { 00:04:01.025 "mask": "0x4", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "bdev": { 00:04:01.025 "mask": "0x8", 00:04:01.025 "tpoint_mask": "0xffffffffffffffff" 00:04:01.025 }, 00:04:01.025 "nvmf_rdma": { 00:04:01.025 "mask": "0x10", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "nvmf_tcp": { 00:04:01.025 "mask": "0x20", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 
00:04:01.025 "ftl": { 00:04:01.025 "mask": "0x40", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "blobfs": { 00:04:01.025 "mask": "0x80", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "dsa": { 00:04:01.025 "mask": "0x200", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "thread": { 00:04:01.025 "mask": "0x400", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "nvme_pcie": { 00:04:01.025 "mask": "0x800", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "iaa": { 00:04:01.025 "mask": "0x1000", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "nvme_tcp": { 00:04:01.025 "mask": "0x2000", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "bdev_nvme": { 00:04:01.025 "mask": "0x4000", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "sock": { 00:04:01.025 "mask": "0x8000", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "blob": { 00:04:01.025 "mask": "0x10000", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "bdev_raid": { 00:04:01.025 "mask": "0x20000", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 }, 00:04:01.025 "scheduler": { 00:04:01.025 "mask": "0x40000", 00:04:01.025 "tpoint_mask": "0x0" 00:04:01.025 } 00:04:01.025 }' 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:01.025 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:01.284 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:01.284 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:01.284 12:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:01.284 00:04:01.284 real 0m0.181s 00:04:01.284 user 0m0.145s 00:04:01.284 sys 0m0.029s 00:04:01.284 12:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.284 12:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:01.284 ************************************ 00:04:01.284 END TEST rpc_trace_cmd_test 00:04:01.284 ************************************ 00:04:01.284 12:42:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:01.284 12:42:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:01.284 12:42:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:01.284 12:42:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.284 12:42:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.284 12:42:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.284 ************************************ 00:04:01.284 START TEST rpc_daemon_integrity 00:04:01.284 ************************************ 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.284 { 00:04:01.284 "name": "Malloc2", 00:04:01.284 "aliases": [ 00:04:01.284 "d0a753f0-be9a-45d6-a170-08699d086dfa" 00:04:01.284 ], 00:04:01.284 "product_name": "Malloc disk", 00:04:01.284 "block_size": 512, 00:04:01.284 "num_blocks": 16384, 00:04:01.284 "uuid": "d0a753f0-be9a-45d6-a170-08699d086dfa", 00:04:01.284 "assigned_rate_limits": { 00:04:01.284 "rw_ios_per_sec": 0, 00:04:01.284 "rw_mbytes_per_sec": 0, 00:04:01.284 "r_mbytes_per_sec": 0, 00:04:01.284 "w_mbytes_per_sec": 0 00:04:01.284 }, 00:04:01.284 "claimed": false, 00:04:01.284 "zoned": false, 00:04:01.284 "supported_io_types": { 00:04:01.284 "read": true, 00:04:01.284 "write": true, 00:04:01.284 "unmap": true, 00:04:01.284 "flush": true, 00:04:01.284 "reset": true, 00:04:01.284 "nvme_admin": false, 00:04:01.284 "nvme_io": false, 00:04:01.284 "nvme_io_md": false, 00:04:01.284 "write_zeroes": true, 00:04:01.284 "zcopy": true, 00:04:01.284 "get_zone_info": false, 00:04:01.284 "zone_management": false, 00:04:01.284 "zone_append": false, 00:04:01.284 "compare": false, 00:04:01.284 "compare_and_write": false, 00:04:01.284 "abort": true, 00:04:01.284 "seek_hole": false, 00:04:01.284 "seek_data": false, 00:04:01.284 "copy": true, 00:04:01.284 "nvme_iov_md": false 00:04:01.284 }, 00:04:01.284 "memory_domains": [ 00:04:01.284 { 00:04:01.284 "dma_device_id": "system", 00:04:01.284 "dma_device_type": 1 00:04:01.284 }, 00:04:01.284 { 00:04:01.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.284 "dma_device_type": 2 00:04:01.284 } 00:04:01.284 ], 00:04:01.284 "driver_specific": {} 00:04:01.284 } 00:04:01.284 ]' 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.284 [2024-11-27 12:42:27.652164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:01.284 [2024-11-27 12:42:27.652190] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:01.284 [2024-11-27 12:42:27.652203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1590590 00:04:01.284 [2024-11-27 12:42:27.652211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:01.284 [2024-11-27 12:42:27.653239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:01.284 [2024-11-27 12:42:27.653262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:01.284 Passthru0 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.284 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:01.544 { 00:04:01.544 "name": "Malloc2", 00:04:01.544 "aliases": [ 00:04:01.544 "d0a753f0-be9a-45d6-a170-08699d086dfa" 00:04:01.544 ], 00:04:01.544 "product_name": "Malloc disk", 00:04:01.544 "block_size": 512, 00:04:01.544 "num_blocks": 16384, 00:04:01.544 "uuid": "d0a753f0-be9a-45d6-a170-08699d086dfa", 00:04:01.544 "assigned_rate_limits": { 00:04:01.544 "rw_ios_per_sec": 0, 00:04:01.544 "rw_mbytes_per_sec": 0, 00:04:01.544 "r_mbytes_per_sec": 0, 00:04:01.544 "w_mbytes_per_sec": 0 00:04:01.544 }, 00:04:01.544 "claimed": true, 00:04:01.544 "claim_type": "exclusive_write", 00:04:01.544 "zoned": false, 00:04:01.544 "supported_io_types": { 00:04:01.544 "read": true, 00:04:01.544 "write": true, 00:04:01.544 "unmap": true, 00:04:01.544 "flush": true, 00:04:01.544 "reset": true, 00:04:01.544 "nvme_admin": false, 00:04:01.544 "nvme_io": false, 00:04:01.544 "nvme_io_md": false, 00:04:01.544 "write_zeroes": true, 00:04:01.544 "zcopy": true, 00:04:01.544 "get_zone_info": false, 00:04:01.544 "zone_management": false, 00:04:01.544 "zone_append": false, 00:04:01.544 "compare": false, 00:04:01.544 "compare_and_write": false, 00:04:01.544 "abort": true, 00:04:01.544 "seek_hole": false, 00:04:01.544 "seek_data": false, 00:04:01.544 "copy": true, 00:04:01.544 "nvme_iov_md": false 00:04:01.544 }, 00:04:01.544 "memory_domains": [ 00:04:01.544 { 00:04:01.544 "dma_device_id": "system", 00:04:01.544 "dma_device_type": 1 00:04:01.544 }, 00:04:01.544 { 00:04:01.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.544 "dma_device_type": 2 00:04:01.544 } 00:04:01.544 ], 00:04:01.544 "driver_specific": {} 00:04:01.544 }, 00:04:01.544 { 00:04:01.544 "name": "Passthru0", 00:04:01.544 "aliases": [ 00:04:01.544 "64b39bd6-ecfb-5cd6-b24f-19c3bb8f64b3" 00:04:01.544 ], 00:04:01.544 "product_name": "passthru", 00:04:01.544 "block_size": 512, 00:04:01.544 "num_blocks": 16384, 00:04:01.544 "uuid": "64b39bd6-ecfb-5cd6-b24f-19c3bb8f64b3", 00:04:01.544 "assigned_rate_limits": { 00:04:01.544 "rw_ios_per_sec": 0, 00:04:01.544 "rw_mbytes_per_sec": 0, 00:04:01.544 "r_mbytes_per_sec": 0, 00:04:01.544 "w_mbytes_per_sec": 0 00:04:01.544 }, 00:04:01.544 "claimed": false, 00:04:01.544 "zoned": false, 00:04:01.544 "supported_io_types": { 00:04:01.544 "read": true, 00:04:01.544 "write": true, 00:04:01.544 "unmap": true, 00:04:01.544 "flush": true, 00:04:01.544 "reset": true, 00:04:01.544 "nvme_admin": false, 
00:04:01.544 "nvme_io": false, 00:04:01.544 "nvme_io_md": false, 00:04:01.544 "write_zeroes": true, 00:04:01.544 "zcopy": true, 00:04:01.544 "get_zone_info": false, 00:04:01.544 "zone_management": false, 00:04:01.544 "zone_append": false, 00:04:01.544 "compare": false, 00:04:01.544 "compare_and_write": false, 00:04:01.544 "abort": true, 00:04:01.544 "seek_hole": false, 00:04:01.544 "seek_data": false, 00:04:01.544 "copy": true, 00:04:01.544 "nvme_iov_md": false 00:04:01.544 }, 00:04:01.544 "memory_domains": [ 00:04:01.544 { 00:04:01.544 "dma_device_id": "system", 00:04:01.544 "dma_device_type": 1 00:04:01.544 }, 00:04:01.544 { 00:04:01.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.544 "dma_device_type": 2 00:04:01.544 } 00:04:01.544 ], 00:04:01.544 "driver_specific": { 00:04:01.544 "passthru": { 00:04:01.544 "name": "Passthru0", 00:04:01.544 "base_bdev_name": "Malloc2" 00:04:01.544 } 00:04:01.544 } 00:04:01.544 } 00:04:01.544 ]' 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:01.544 00:04:01.544 real 0m0.291s 00:04:01.544 user 0m0.179s 00:04:01.544 sys 0m0.052s 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.544 12:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.544 ************************************ 00:04:01.544 END TEST rpc_daemon_integrity 00:04:01.544 ************************************ 00:04:01.544 12:42:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:01.544 12:42:27 rpc -- rpc/rpc.sh@84 -- # killprocess 3960761 00:04:01.544 12:42:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 3960761 ']' 00:04:01.544 12:42:27 rpc -- common/autotest_common.sh@958 -- # kill -0 3960761 00:04:01.544 12:42:27 rpc -- common/autotest_common.sh@959 -- # uname 00:04:01.544 12:42:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.544 12:42:27 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3960761 00:04:01.544 12:42:27 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.544 12:42:27 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.544 12:42:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3960761' 00:04:01.544 killing process with pid 3960761 00:04:01.544 12:42:27 rpc -- common/autotest_common.sh@973 -- # kill 3960761 00:04:01.544 12:42:27 rpc -- common/autotest_common.sh@978 -- # wait 3960761 00:04:02.110 00:04:02.110 real 0m2.671s 00:04:02.110 user 0m3.340s 00:04:02.110 sys 0m0.861s 00:04:02.110 12:42:28 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.110 12:42:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.110 ************************************ 00:04:02.110 END TEST rpc 00:04:02.110 ************************************ 00:04:02.110 12:42:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:02.110 12:42:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.110 12:42:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.110 12:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:02.110 ************************************ 00:04:02.110 START TEST skip_rpc 00:04:02.110 ************************************ 00:04:02.110 12:42:28 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:02.110 * Looking for test storage... 00:04:02.110 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:02.110 12:42:28 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:02.110 12:42:28 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:02.110 12:42:28 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:02.110 12:42:28 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:02.110 12:42:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.111 12:42:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.111 12:42:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.111 12:42:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:02.111 12:42:28 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.111 12:42:28 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:02.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.111 --rc genhtml_branch_coverage=1 00:04:02.111 --rc genhtml_function_coverage=1 00:04:02.111 --rc genhtml_legend=1 00:04:02.111 --rc geninfo_all_blocks=1 00:04:02.111 --rc geninfo_unexecuted_blocks=1 00:04:02.111 00:04:02.111 ' 00:04:02.111 12:42:28 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:02.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.111 --rc genhtml_branch_coverage=1 00:04:02.111 --rc genhtml_function_coverage=1 00:04:02.111 --rc genhtml_legend=1 00:04:02.111 --rc geninfo_all_blocks=1 00:04:02.111 --rc geninfo_unexecuted_blocks=1 00:04:02.111 00:04:02.111 ' 00:04:02.111 12:42:28 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:02.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.111 --rc genhtml_branch_coverage=1 00:04:02.111 --rc genhtml_function_coverage=1 00:04:02.111 --rc genhtml_legend=1 00:04:02.111 --rc geninfo_all_blocks=1 00:04:02.111 --rc geninfo_unexecuted_blocks=1 00:04:02.111 00:04:02.111 ' 00:04:02.111 12:42:28 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:02.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.111 --rc genhtml_branch_coverage=1 00:04:02.111 --rc genhtml_function_coverage=1 00:04:02.111 --rc genhtml_legend=1 00:04:02.111 --rc geninfo_all_blocks=1 00:04:02.111 --rc geninfo_unexecuted_blocks=1 00:04:02.111 00:04:02.111 ' 00:04:02.111 12:42:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:02.111 12:42:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:02.111 12:42:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:02.111 12:42:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.111 12:42:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.111 12:42:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.369 ************************************ 00:04:02.369 START TEST skip_rpc 00:04:02.369 ************************************ 00:04:02.369 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:02.369 12:42:28 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3961473 00:04:02.369 12:42:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.369 12:42:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:02.369 12:42:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:02.369 [2024-11-27 12:42:28.553734] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:04:02.369 [2024-11-27 12:42:28.553778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3961473 ] 00:04:02.369 [2024-11-27 12:42:28.638811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.369 [2024-11-27 12:42:28.676894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3961473 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3961473 ']' 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3961473 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3961473 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3961473' 00:04:07.637 killing process with pid 3961473 00:04:07.637 12:42:33 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3961473 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3961473 00:04:07.637 00:04:07.637 real 0m5.363s 00:04:07.637 user 0m5.108s 00:04:07.637 sys 0m0.292s 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.637 12:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.637 ************************************ 00:04:07.637 END TEST skip_rpc 00:04:07.637 ************************************ 00:04:07.637 12:42:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:07.637 12:42:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.637 12:42:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.637 12:42:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.637 ************************************ 00:04:07.637 START TEST skip_rpc_with_json 00:04:07.637 ************************************ 00:04:07.637 12:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:07.637 12:42:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:07.637 12:42:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3962321 00:04:07.637 12:42:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.637 12:42:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:07.637 12:42:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3962321 00:04:07.637 12:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3962321 ']' 00:04:07.637 12:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.637 12:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.637 12:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.638 12:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.638 12:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.638 [2024-11-27 12:42:33.990875] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
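The exchange recorded below is the JSON-driven half of the test: nvmf_get_transports fails with 'No such device' until nvmf_create_transport brings up the TCP transport, after which save_config dumps the full subsystem configuration that closes this excerpt. A rough hand-driven equivalent — assuming the stock scripts/rpc.py client from the same tree (rpc_cmd is a thin wrapper around it) and the default /var/tmp/spdk.sock socket:

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/scripts/rpc.py" nvmf_get_transports --trtype tcp    # errors until a transport exists
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp        # *** TCP Transport Init ***
"$SPDK_DIR/scripts/rpc.py" save_config                         # emits the JSON config dump seen below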
00:04:07.638 [2024-11-27 12:42:33.990919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3962321 ] 00:04:07.896 [2024-11-27 12:42:34.079386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.896 [2024-11-27 12:42:34.121375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.461 [2024-11-27 12:42:34.810337] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:08.461 request: 00:04:08.461 { 00:04:08.461 "trtype": "tcp", 00:04:08.461 "method": "nvmf_get_transports", 00:04:08.461 "req_id": 1 00:04:08.461 } 00:04:08.461 Got JSON-RPC error response 00:04:08.461 response: 00:04:08.461 { 00:04:08.461 "code": -19, 00:04:08.461 "message": "No such device" 00:04:08.461 } 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.461 [2024-11-27 12:42:34.822451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.461 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.719 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.719 12:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:08.719 { 00:04:08.719 "subsystems": [ 00:04:08.719 { 00:04:08.719 "subsystem": "fsdev", 00:04:08.719 "config": [ 00:04:08.719 { 00:04:08.719 "method": "fsdev_set_opts", 00:04:08.719 "params": { 00:04:08.719 "fsdev_io_pool_size": 65535, 00:04:08.719 "fsdev_io_cache_size": 256 00:04:08.719 } 00:04:08.719 } 00:04:08.719 ] 00:04:08.719 }, 00:04:08.719 { 00:04:08.719 "subsystem": "keyring", 00:04:08.719 "config": [] 00:04:08.719 }, 00:04:08.719 { 00:04:08.719 "subsystem": "iobuf", 00:04:08.719 "config": [ 00:04:08.719 { 00:04:08.719 "method": "iobuf_set_options", 00:04:08.719 "params": { 00:04:08.719 "small_pool_count": 8192, 00:04:08.719 "large_pool_count": 1024, 00:04:08.719 "small_bufsize": 8192, 00:04:08.719 "large_bufsize": 135168, 00:04:08.719 "enable_numa": false 00:04:08.719 } 00:04:08.719 } 00:04:08.719 ] 00:04:08.719 }, 00:04:08.719 { 00:04:08.719 "subsystem": "sock", 00:04:08.719 "config": [ 00:04:08.719 { 
00:04:08.719 "method": "sock_set_default_impl", 00:04:08.719 "params": { 00:04:08.719 "impl_name": "posix" 00:04:08.719 } 00:04:08.719 }, 00:04:08.719 { 00:04:08.719 "method": "sock_impl_set_options", 00:04:08.719 "params": { 00:04:08.719 "impl_name": "ssl", 00:04:08.719 "recv_buf_size": 4096, 00:04:08.719 "send_buf_size": 4096, 00:04:08.719 "enable_recv_pipe": true, 00:04:08.719 "enable_quickack": false, 00:04:08.719 "enable_placement_id": 0, 00:04:08.719 "enable_zerocopy_send_server": true, 00:04:08.719 "enable_zerocopy_send_client": false, 00:04:08.719 "zerocopy_threshold": 0, 00:04:08.719 "tls_version": 0, 00:04:08.719 "enable_ktls": false 00:04:08.719 } 00:04:08.719 }, 00:04:08.719 { 00:04:08.719 "method": "sock_impl_set_options", 00:04:08.719 "params": { 00:04:08.719 "impl_name": "posix", 00:04:08.719 "recv_buf_size": 2097152, 00:04:08.719 "send_buf_size": 2097152, 00:04:08.719 "enable_recv_pipe": true, 00:04:08.719 "enable_quickack": false, 00:04:08.719 "enable_placement_id": 0, 00:04:08.719 "enable_zerocopy_send_server": true, 00:04:08.719 "enable_zerocopy_send_client": false, 00:04:08.719 "zerocopy_threshold": 0, 00:04:08.719 "tls_version": 0, 00:04:08.719 "enable_ktls": false 00:04:08.719 } 00:04:08.719 } 00:04:08.719 ] 00:04:08.719 }, 00:04:08.719 { 00:04:08.719 "subsystem": "vmd", 00:04:08.719 "config": [] 00:04:08.719 }, 00:04:08.719 { 00:04:08.719 "subsystem": "accel", 00:04:08.719 "config": [ 00:04:08.719 { 00:04:08.719 "method": "accel_set_options", 00:04:08.719 "params": { 00:04:08.719 "small_cache_size": 128, 00:04:08.720 "large_cache_size": 16, 00:04:08.720 "task_count": 2048, 00:04:08.720 "sequence_count": 2048, 00:04:08.720 "buf_count": 2048 00:04:08.720 } 00:04:08.720 } 00:04:08.720 ] 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "subsystem": "bdev", 00:04:08.720 "config": [ 00:04:08.720 { 00:04:08.720 "method": "bdev_set_options", 00:04:08.720 "params": { 00:04:08.720 "bdev_io_pool_size": 65535, 00:04:08.720 "bdev_io_cache_size": 256, 00:04:08.720 "bdev_auto_examine": true, 00:04:08.720 "iobuf_small_cache_size": 128, 00:04:08.720 "iobuf_large_cache_size": 16 00:04:08.720 } 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "method": "bdev_raid_set_options", 00:04:08.720 "params": { 00:04:08.720 "process_window_size_kb": 1024, 00:04:08.720 "process_max_bandwidth_mb_sec": 0 00:04:08.720 } 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "method": "bdev_iscsi_set_options", 00:04:08.720 "params": { 00:04:08.720 "timeout_sec": 30 00:04:08.720 } 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "method": "bdev_nvme_set_options", 00:04:08.720 "params": { 00:04:08.720 "action_on_timeout": "none", 00:04:08.720 "timeout_us": 0, 00:04:08.720 "timeout_admin_us": 0, 00:04:08.720 "keep_alive_timeout_ms": 10000, 00:04:08.720 "arbitration_burst": 0, 00:04:08.720 "low_priority_weight": 0, 00:04:08.720 "medium_priority_weight": 0, 00:04:08.720 "high_priority_weight": 0, 00:04:08.720 "nvme_adminq_poll_period_us": 10000, 00:04:08.720 "nvme_ioq_poll_period_us": 0, 00:04:08.720 "io_queue_requests": 0, 00:04:08.720 "delay_cmd_submit": true, 00:04:08.720 "transport_retry_count": 4, 00:04:08.720 "bdev_retry_count": 3, 00:04:08.720 "transport_ack_timeout": 0, 00:04:08.720 "ctrlr_loss_timeout_sec": 0, 00:04:08.720 "reconnect_delay_sec": 0, 00:04:08.720 "fast_io_fail_timeout_sec": 0, 00:04:08.720 "disable_auto_failback": false, 00:04:08.720 "generate_uuids": false, 00:04:08.720 "transport_tos": 0, 00:04:08.720 "nvme_error_stat": false, 00:04:08.720 "rdma_srq_size": 0, 00:04:08.720 "io_path_stat": false, 
00:04:08.720 "allow_accel_sequence": false, 00:04:08.720 "rdma_max_cq_size": 0, 00:04:08.720 "rdma_cm_event_timeout_ms": 0, 00:04:08.720 "dhchap_digests": [ 00:04:08.720 "sha256", 00:04:08.720 "sha384", 00:04:08.720 "sha512" 00:04:08.720 ], 00:04:08.720 "dhchap_dhgroups": [ 00:04:08.720 "null", 00:04:08.720 "ffdhe2048", 00:04:08.720 "ffdhe3072", 00:04:08.720 "ffdhe4096", 00:04:08.720 "ffdhe6144", 00:04:08.720 "ffdhe8192" 00:04:08.720 ] 00:04:08.720 } 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "method": "bdev_nvme_set_hotplug", 00:04:08.720 "params": { 00:04:08.720 "period_us": 100000, 00:04:08.720 "enable": false 00:04:08.720 } 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "method": "bdev_wait_for_examine" 00:04:08.720 } 00:04:08.720 ] 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "subsystem": "scsi", 00:04:08.720 "config": null 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "subsystem": "scheduler", 00:04:08.720 "config": [ 00:04:08.720 { 00:04:08.720 "method": "framework_set_scheduler", 00:04:08.720 "params": { 00:04:08.720 "name": "static" 00:04:08.720 } 00:04:08.720 } 00:04:08.720 ] 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "subsystem": "vhost_scsi", 00:04:08.720 "config": [] 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "subsystem": "vhost_blk", 00:04:08.720 "config": [] 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "subsystem": "ublk", 00:04:08.720 "config": [] 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "subsystem": "nbd", 00:04:08.720 "config": [] 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "subsystem": "nvmf", 00:04:08.720 "config": [ 00:04:08.720 { 00:04:08.720 "method": "nvmf_set_config", 00:04:08.720 "params": { 00:04:08.720 "discovery_filter": "match_any", 00:04:08.720 "admin_cmd_passthru": { 00:04:08.720 "identify_ctrlr": false 00:04:08.720 }, 00:04:08.720 "dhchap_digests": [ 00:04:08.720 "sha256", 00:04:08.720 "sha384", 00:04:08.720 "sha512" 00:04:08.720 ], 00:04:08.720 "dhchap_dhgroups": [ 00:04:08.720 "null", 00:04:08.720 "ffdhe2048", 00:04:08.720 "ffdhe3072", 00:04:08.720 "ffdhe4096", 00:04:08.720 "ffdhe6144", 00:04:08.720 "ffdhe8192" 00:04:08.720 ] 00:04:08.720 } 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "method": "nvmf_set_max_subsystems", 00:04:08.720 "params": { 00:04:08.720 "max_subsystems": 1024 00:04:08.720 } 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "method": "nvmf_set_crdt", 00:04:08.720 "params": { 00:04:08.720 "crdt1": 0, 00:04:08.720 "crdt2": 0, 00:04:08.720 "crdt3": 0 00:04:08.720 } 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "method": "nvmf_create_transport", 00:04:08.720 "params": { 00:04:08.720 "trtype": "TCP", 00:04:08.720 "max_queue_depth": 128, 00:04:08.720 "max_io_qpairs_per_ctrlr": 127, 00:04:08.720 "in_capsule_data_size": 4096, 00:04:08.720 "max_io_size": 131072, 00:04:08.720 "io_unit_size": 131072, 00:04:08.720 "max_aq_depth": 128, 00:04:08.720 "num_shared_buffers": 511, 00:04:08.720 "buf_cache_size": 4294967295, 00:04:08.720 "dif_insert_or_strip": false, 00:04:08.720 "zcopy": false, 00:04:08.720 "c2h_success": true, 00:04:08.720 "sock_priority": 0, 00:04:08.720 "abort_timeout_sec": 1, 00:04:08.720 "ack_timeout": 0, 00:04:08.720 "data_wr_pool_size": 0 00:04:08.720 } 00:04:08.720 } 00:04:08.720 ] 00:04:08.720 }, 00:04:08.720 { 00:04:08.720 "subsystem": "iscsi", 00:04:08.720 "config": [ 00:04:08.720 { 00:04:08.720 "method": "iscsi_set_options", 00:04:08.720 "params": { 00:04:08.720 "node_base": "iqn.2016-06.io.spdk", 00:04:08.720 "max_sessions": 128, 00:04:08.720 "max_connections_per_session": 2, 00:04:08.720 "max_queue_depth": 64, 00:04:08.720 
"default_time2wait": 2, 00:04:08.720 "default_time2retain": 20, 00:04:08.720 "first_burst_length": 8192, 00:04:08.720 "immediate_data": true, 00:04:08.720 "allow_duplicated_isid": false, 00:04:08.720 "error_recovery_level": 0, 00:04:08.720 "nop_timeout": 60, 00:04:08.720 "nop_in_interval": 30, 00:04:08.720 "disable_chap": false, 00:04:08.720 "require_chap": false, 00:04:08.720 "mutual_chap": false, 00:04:08.720 "chap_group": 0, 00:04:08.720 "max_large_datain_per_connection": 64, 00:04:08.720 "max_r2t_per_connection": 4, 00:04:08.720 "pdu_pool_size": 36864, 00:04:08.720 "immediate_data_pool_size": 16384, 00:04:08.720 "data_out_pool_size": 2048 00:04:08.720 } 00:04:08.720 } 00:04:08.720 ] 00:04:08.720 } 00:04:08.720 ] 00:04:08.720 } 00:04:08.720 12:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:08.720 12:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3962321 00:04:08.720 12:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3962321 ']' 00:04:08.720 12:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3962321 00:04:08.720 12:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:08.720 12:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.720 12:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3962321 00:04:08.720 12:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.720 12:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.720 12:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3962321' 00:04:08.720 killing process with pid 3962321 00:04:08.720 12:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3962321 00:04:08.720 12:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3962321 00:04:08.979 12:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3962597 00:04:09.237 12:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:09.237 12:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3962597 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3962597 ']' 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3962597 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3962597 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3962597' 00:04:14.500 killing process with pid 3962597 00:04:14.500 12:42:40 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3962597 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3962597 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:14.500 00:04:14.500 real 0m6.798s 00:04:14.500 user 0m6.630s 00:04:14.500 sys 0m0.657s 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.500 ************************************ 00:04:14.500 END TEST skip_rpc_with_json 00:04:14.500 ************************************ 00:04:14.500 12:42:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:14.500 12:42:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.500 12:42:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.500 12:42:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.500 ************************************ 00:04:14.500 START TEST skip_rpc_with_delay 00:04:14.500 ************************************ 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.500 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.501 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:14.501 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.501 [2024-11-27 12:42:40.880580] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
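Editor's note: the skip_rpc_with_json test that just finished exercises SPDK's JSON config round-trip: build state over RPC, dump it with save_config, restart the target with --no-rpc-server --json, and confirm the state (here the TCP transport) is recreated with no RPC traffic at all, which the grep for 'TCP Transport Init' verifies. A minimal sketch of that cycle, assuming $SPDK_BIN points at the build/bin directory and rpc.py is on PATH (both paths are abbreviated from the ones in the log, and the harness's waitforlisten is replaced by a plain sleep):

    # Phase 1: build state over RPC and snapshot it
    $SPDK_BIN/spdk_tgt -m 0x1 &                  # target with the RPC server enabled
    sleep 1                                      # stand-in for waitforlisten
    rpc.py nvmf_create_transport -t tcp          # mutate runtime state over RPC
    rpc.py save_config > config.json             # dump the live config as JSON
    kill %1; wait

    # Phase 2: replay the snapshot with no RPC server at all
    $SPDK_BIN/spdk_tgt --no-rpc-server -m 0x1 --json config.json &> log.txt &
    sleep 5; kill %1; wait
    grep -q 'TCP Transport Init' log.txt         # transport was rebuilt from JSON alone

The skip_rpc_with_delay failure directly above is the intended outcome of that test: --wait-for-rpc defers subsystem init until an RPC arrives, which is meaningless when --no-rpc-server is given, so the test asserts that spdk_tgt rejects the combination.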
00:04:14.758 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:14.758 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:14.758 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:14.758 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:14.758 00:04:14.758 real 0m0.075s 00:04:14.758 user 0m0.040s 00:04:14.758 sys 0m0.034s 00:04:14.758 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.758 12:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:14.758 ************************************ 00:04:14.758 END TEST skip_rpc_with_delay 00:04:14.758 ************************************ 00:04:14.758 12:42:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:14.758 12:42:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:14.758 12:42:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:14.758 12:42:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.758 12:42:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.759 12:42:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.759 ************************************ 00:04:14.759 START TEST exit_on_failed_rpc_init 00:04:14.759 ************************************ 00:04:14.759 12:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:14.759 12:42:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3963711 00:04:14.759 12:42:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.759 12:42:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3963711 00:04:14.759 12:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3963711 ']' 00:04:14.759 12:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.759 12:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.759 12:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.759 12:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.759 12:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.759 [2024-11-27 12:42:41.034284] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
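Editor's note: exit_on_failed_rpc_init, starting here, checks that a second target fails fast when RPC initialization cannot complete: the first instance owns the default Unix socket /var/tmp/spdk.sock, so the second instance (launched on a different core mask) must hit "socket path in use", stop via spdk_app_stop, and exit non-zero, which the NOT() wrapper converts into a passing result. A hedged sketch of the collision, with $SPDK_BIN standing in for the build path from the log:

    $SPDK_BIN/spdk_tgt -m 0x1 &          # first target binds /var/tmp/spdk.sock
    sleep 1                              # hypothetical settle delay; the test uses waitforlisten
    if $SPDK_BIN/spdk_tgt -m 0x2; then   # second target, same default RPC socket
        echo 'unexpected success' >&2; exit 1
    fi                                   # the non-zero exit here is the passing outcome
    kill %1; wait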
00:04:14.759 [2024-11-27 12:42:41.034328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3963711 ] 00:04:14.759 [2024-11-27 12:42:41.122466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.016 [2024-11-27 12:42:41.161790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:15.581 12:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:15.581 [2024-11-27 12:42:41.897473] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:04:15.581 [2024-11-27 12:42:41.897523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3963721 ] 00:04:15.839 [2024-11-27 12:42:41.988386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.839 [2024-11-27 12:42:42.028444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.839 [2024-11-27 12:42:42.028504] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:15.839 [2024-11-27 12:42:42.028516] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:15.839 [2024-11-27 12:42:42.028525] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3963711 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3963711 ']' 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3963711 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3963711 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3963711' 00:04:15.839 killing process with pid 3963711 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3963711 00:04:15.839 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3963711 00:04:16.097 00:04:16.097 real 0m1.457s 00:04:16.097 user 0m1.607s 00:04:16.097 sys 0m0.482s 00:04:16.097 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.097 12:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:16.097 ************************************ 00:04:16.097 END TEST exit_on_failed_rpc_init 00:04:16.097 ************************************ 00:04:16.354 12:42:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:16.354 00:04:16.354 real 0m14.189s 00:04:16.354 user 0m13.610s 00:04:16.354 sys 0m1.782s 00:04:16.355 12:42:42 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.355 12:42:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.355 ************************************ 00:04:16.355 END TEST skip_rpc 00:04:16.355 ************************************ 00:04:16.355 12:42:42 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:16.355 12:42:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.355 12:42:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.355 12:42:42 -- 
common/autotest_common.sh@10 -- # set +x 00:04:16.355 ************************************ 00:04:16.355 START TEST rpc_client 00:04:16.355 ************************************ 00:04:16.355 12:42:42 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:16.355 * Looking for test storage... 00:04:16.355 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:16.355 12:42:42 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.355 12:42:42 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.355 12:42:42 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.663 12:42:42 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.663 12:42:42 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:16.663 12:42:42 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.663 12:42:42 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.663 --rc genhtml_branch_coverage=1 00:04:16.663 --rc genhtml_function_coverage=1 00:04:16.663 --rc genhtml_legend=1 00:04:16.663 --rc geninfo_all_blocks=1 00:04:16.663 --rc geninfo_unexecuted_blocks=1 00:04:16.663 00:04:16.663 ' 00:04:16.663 12:42:42 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.663 --rc genhtml_branch_coverage=1 00:04:16.663 --rc genhtml_function_coverage=1 00:04:16.663 --rc genhtml_legend=1 00:04:16.663 --rc geninfo_all_blocks=1 00:04:16.663 --rc geninfo_unexecuted_blocks=1 00:04:16.663 00:04:16.663 ' 00:04:16.663 12:42:42 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.663 --rc genhtml_branch_coverage=1 00:04:16.663 --rc genhtml_function_coverage=1 00:04:16.663 --rc genhtml_legend=1 00:04:16.663 --rc geninfo_all_blocks=1 00:04:16.663 --rc geninfo_unexecuted_blocks=1 00:04:16.663 00:04:16.663 ' 00:04:16.663 12:42:42 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.663 --rc genhtml_branch_coverage=1 00:04:16.663 --rc genhtml_function_coverage=1 00:04:16.663 --rc genhtml_legend=1 00:04:16.663 --rc geninfo_all_blocks=1 00:04:16.663 --rc geninfo_unexecuted_blocks=1 00:04:16.663 00:04:16.663 ' 00:04:16.663 12:42:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:16.663 OK 00:04:16.663 12:42:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:16.663 00:04:16.663 real 0m0.207s 00:04:16.663 user 0m0.111s 00:04:16.663 sys 0m0.113s 00:04:16.663 12:42:42 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.663 12:42:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:16.663 ************************************ 00:04:16.663 END TEST rpc_client 00:04:16.663 ************************************ 00:04:16.663 12:42:42 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:16.663 
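Editor's note: TEST rpc_client above runs a compiled client binary (test/rpc_client/rpc_client_test) against the target's Unix-domain RPC socket; the long lcov/LCOV_OPTS block before it is just the harness selecting coverage flags and can be read past. The wire protocol is JSON-RPC over that socket, the same shape as the nvmf_get_transports error captured earlier in this log. A hedged way to reproduce that error by hand with the script client this log uses elsewhere (rpc.py), against a target that has no TCP transport yet:

    # the wrapper sends roughly:
    #   {"jsonrpc":"2.0","method":"nvmf_get_transports","params":{"trtype":"tcp"},"id":1}
    rpc.py -s /var/tmp/spdk.sock nvmf_get_transports --trtype tcp
    # expected failure: JSON-RPC error code -19 ("No such device"), non-zero exit status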
12:42:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.663 12:42:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.663 12:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.663 ************************************ 00:04:16.663 START TEST json_config 00:04:16.663 ************************************ 00:04:16.663 12:42:42 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:16.663 12:42:42 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.663 12:42:42 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.663 12:42:42 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.663 12:42:42 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:16.663 12:42:42 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.663 12:42:42 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.663 12:42:42 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.663 12:42:42 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.663 12:42:42 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.663 12:42:42 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.663 12:42:42 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.663 12:42:42 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.663 12:42:42 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.663 12:42:42 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.663 12:42:42 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.663 12:42:42 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:16.663 12:42:42 json_config -- scripts/common.sh@345 -- # : 1 00:04:16.663 12:42:42 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.663 12:42:42 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.663 12:42:42 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:16.663 12:42:42 json_config -- scripts/common.sh@353 -- # local d=1 00:04:16.663 12:42:42 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.663 12:42:42 json_config -- scripts/common.sh@355 -- # echo 1 00:04:16.663 12:42:42 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.663 12:42:42 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:16.663 12:42:42 json_config -- scripts/common.sh@353 -- # local d=2 00:04:16.663 12:42:42 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.663 12:42:42 json_config -- scripts/common.sh@355 -- # echo 2 00:04:16.663 12:42:42 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.663 12:42:42 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.663 12:42:42 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.663 12:42:42 json_config -- scripts/common.sh@368 -- # return 0 00:04:16.663 12:42:42 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.663 12:42:42 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.663 --rc genhtml_branch_coverage=1 00:04:16.663 --rc genhtml_function_coverage=1 00:04:16.663 --rc genhtml_legend=1 00:04:16.663 --rc geninfo_all_blocks=1 00:04:16.663 --rc geninfo_unexecuted_blocks=1 00:04:16.663 00:04:16.663 ' 00:04:16.663 12:42:42 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.663 --rc genhtml_branch_coverage=1 00:04:16.663 --rc genhtml_function_coverage=1 00:04:16.663 --rc genhtml_legend=1 00:04:16.663 --rc geninfo_all_blocks=1 00:04:16.663 --rc geninfo_unexecuted_blocks=1 00:04:16.663 00:04:16.663 ' 00:04:16.663 12:42:42 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.663 --rc genhtml_branch_coverage=1 00:04:16.664 --rc genhtml_function_coverage=1 00:04:16.664 --rc genhtml_legend=1 00:04:16.664 --rc geninfo_all_blocks=1 00:04:16.664 --rc geninfo_unexecuted_blocks=1 00:04:16.664 00:04:16.664 ' 00:04:16.664 12:42:42 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.664 --rc genhtml_branch_coverage=1 00:04:16.664 --rc genhtml_function_coverage=1 00:04:16.664 --rc genhtml_legend=1 00:04:16.664 --rc geninfo_all_blocks=1 00:04:16.664 --rc geninfo_unexecuted_blocks=1 00:04:16.664 00:04:16.664 ' 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
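Editor's note: json_config sources test/nvmf/common.sh here, and these variables are what produce the 192.168.100.8 and 192.168.100.9 addresses seen further down: 4420-4422 are the standard NVMe-oF listener ports, and RDMA interfaces are numbered upward from NVMF_IP_LEAST_ADDR under NVMF_IP_PREFIX. In effect the address allocation amounts to the following (a paraphrase of the helper, not its verbatim code):

    count=$NVMF_IP_LEAST_ADDR                   # 8
    for nic in $(get_rdma_if_list); do          # mlx_0_0, mlx_0_1 on this node
        ip addr add "$NVMF_IP_PREFIX.$count/24" dev "$nic" 2>/dev/null || true
        count=$((count + 1))                    # .8, .9, ...
    done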
00:04:16.664 12:42:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:16.664 12:42:42 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:16.664 12:42:42 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:16.664 12:42:42 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:16.664 12:42:42 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:16.664 12:42:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.664 12:42:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.664 12:42:42 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.664 12:42:42 json_config -- paths/export.sh@5 -- # export PATH 00:04:16.664 12:42:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@51 -- # : 0 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:16.664 
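Editor's note: the host identity above comes from nvme-cli: gen-hostnqn emits a UUID-based NQN (nqn.2014-08.org.nvmexpress:uuid:<uuid>), and the UUID tail is reused as the host ID, with both packed into the flag array later handed to nvme connect. One hypothetical way to express that derivation, assuming nvme-cli is installed:

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}      # hypothetical extraction of the UUID tail
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # illustrative later use:
    # nvme connect "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 4420 -n <subnqn>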
12:42:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:16.664 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:16.664 12:42:42 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:16.664 INFO: JSON configuration test init 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:16.664 12:42:42 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:16.664 12:42:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.664 12:42:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.664 12:42:43 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:16.664 12:42:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.664 12:42:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.664 12:42:43 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:16.664 12:42:43 json_config -- json_config/common.sh@9 -- # 
local app=target 00:04:16.664 12:42:43 json_config -- json_config/common.sh@10 -- # shift 00:04:16.664 12:42:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:16.664 12:42:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:16.664 12:42:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:16.664 12:42:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.664 12:42:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.664 12:42:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3964113 00:04:16.664 12:42:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:16.664 Waiting for target to run... 00:04:16.664 12:42:43 json_config -- json_config/common.sh@25 -- # waitforlisten 3964113 /var/tmp/spdk_tgt.sock 00:04:16.664 12:42:43 json_config -- common/autotest_common.sh@835 -- # '[' -z 3964113 ']' 00:04:16.664 12:42:43 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:16.664 12:42:43 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.664 12:42:43 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:16.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:16.664 12:42:43 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.664 12:42:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.664 12:42:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:16.955 [2024-11-27 12:42:43.064834] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
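Editor's note: json_config launches its target with -r /var/tmp/spdk_tgt.sock (a non-default socket, so a separate initiator app could coexist) plus --wait-for-rpc, which holds subsystem init until the harness drives it over RPC; waitforlisten then blocks until that socket answers. A simplified sketch of that wait loop, not the shipped helper (the real one lives in autotest_common.sh and carries more guards):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1                        # target died early
            rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0  # socket answers
            sleep 0.1
        done
        return 1
    }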
00:04:16.955 [2024-11-27 12:42:43.064888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3964113 ] 00:04:17.232 [2024-11-27 12:42:43.375053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.232 [2024-11-27 12:42:43.408637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.795 12:42:43 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.795 12:42:43 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:17.795 12:42:43 json_config -- json_config/common.sh@26 -- # echo '' 00:04:17.795 00:04:17.795 12:42:43 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:17.795 12:42:43 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:17.795 12:42:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.795 12:42:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.795 12:42:43 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:17.795 12:42:43 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:17.795 12:42:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:17.795 12:42:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.795 12:42:43 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:17.795 12:42:43 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:17.795 12:42:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:21.127 12:42:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.127 12:42:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:21.127 12:42:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@54 -- # sort 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:21.127 12:42:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.127 12:42:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:21.127 12:42:47 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:21.128 12:42:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.128 12:42:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.128 12:42:47 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:21.128 12:42:47 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:04:21.128 12:42:47 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:04:21.128 12:42:47 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:04:21.128 12:42:47 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:04:21.128 12:42:47 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:21.128 12:42:47 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:21.128 12:42:47 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:21.128 12:42:47 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:21.128 12:42:47 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:21.128 12:42:47 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:21.128 12:42:47 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:21.128 12:42:47 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:04:21.128 12:42:47 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:21.128 12:42:47 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:04:21.128 12:42:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:31.096 
12:42:55 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@320 -- # e810=() 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@321 -- # x722=() 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@322 -- # mlx=() 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:04:31.096 12:42:55 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:04:31.097 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:04:31.097 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:04:31.097 12:42:55 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:04:31.097 Found net devices under 0000:d9:00.0: mlx_0_0 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:04:31.097 Found net devices under 0000:d9:00.1: mlx_0_1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@62 -- # uname 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:04:31.097 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:31.097 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:04:31.097 altname enp217s0f0np0 00:04:31.097 altname ens818f0np0 00:04:31.097 inet 192.168.100.8/24 scope global mlx_0_0 00:04:31.097 valid_lft forever preferred_lft forever 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:04:31.097 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:31.097 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:04:31.097 altname enp217s0f1np1 00:04:31.097 altname ens818f1np1 
00:04:31.097 inet 192.168.100.9/24 scope global mlx_0_1 00:04:31.097 valid_lft forever preferred_lft forever 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@450 -- # return 0 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:31.097 12:42:55 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:04:31.097 192.168.100.9' 00:04:31.098 12:42:55 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:04:31.098 192.168.100.9' 00:04:31.098 12:42:55 json_config -- nvmf/common.sh@485 -- # head -n 1 00:04:31.098 12:42:55 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:31.098 12:42:55 json_config -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:04:31.098 192.168.100.9' 00:04:31.098 12:42:55 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:04:31.098 12:42:55 json_config -- nvmf/common.sh@486 -- # head -n 1 00:04:31.098 12:42:55 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:31.098 12:42:55 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:04:31.098 12:42:55 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:31.098 12:42:55 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:04:31.098 12:42:55 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:04:31.098 12:42:55 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:04:31.098 12:42:55 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:04:31.098 12:42:55 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:31.098 12:42:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:31.098 MallocForNvmf0 00:04:31.098 12:42:56 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:31.098 12:42:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:31.098 MallocForNvmf1 00:04:31.098 12:42:56 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:04:31.098 12:42:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:04:31.098 [2024-11-27 12:42:56.459300] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:04:31.098 [2024-11-27 12:42:56.490050] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1af69f0/0x19cb4c0) succeed. 00:04:31.098 [2024-11-27 12:42:56.502233] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1af5a30/0x1a4b180) succeed. 
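The RPCs traced above assemble the NVMe-oF target state step by step. Condensed into a standalone sketch (assuming spdk_tgt is already up and $rpc expands to scripts/rpc.py -s /var/tmp/spdk_tgt.sock):

    # Backing malloc bdevs: total size in MiB, then block size in bytes
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    # RDMA transport with an 8192-byte I/O unit; the requested 0-byte
    # in-capsule size is raised to the 256-byte minimum, per the warning above
    $rpc nvmf_create_transport -t rdma -u 8192 -c 0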
00:04:31.098 12:42:56 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:31.098 12:42:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:31.098 12:42:56 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:31.098 12:42:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:31.098 12:42:56 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.098 12:42:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.098 12:42:57 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:31.098 12:42:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:31.098 [2024-11-27 12:42:57.226694] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:31.098 12:42:57 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:31.098 12:42:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.098 12:42:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.098 12:42:57 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:31.098 12:42:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.098 12:42:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.098 12:42:57 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:31.098 12:42:57 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.098 12:42:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.356 MallocBdevForConfigChangeCheck 00:04:31.356 12:42:57 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:31.356 12:42:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.356 12:42:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.357 12:42:57 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:31.357 12:42:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.615 12:42:57 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:31.615 INFO: shutting down applications... 
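Just before the shutdown message above, the trace finished exposing the target: one subsystem, two namespaces, one RDMA listener. The same calls as a sketch (same $rpc assumption as before):

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    # Listen on the first RDMA IP discovered earlier
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420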
00:04:31.615 12:42:57 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:31.615 12:42:57 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:31.615 12:42:57 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:31.615 12:42:57 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:34.145 Calling clear_iscsi_subsystem 00:04:34.145 Calling clear_nvmf_subsystem 00:04:34.145 Calling clear_nbd_subsystem 00:04:34.145 Calling clear_ublk_subsystem 00:04:34.145 Calling clear_vhost_blk_subsystem 00:04:34.145 Calling clear_vhost_scsi_subsystem 00:04:34.145 Calling clear_bdev_subsystem 00:04:34.145 12:43:00 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:04:34.145 12:43:00 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:34.145 12:43:00 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:34.145 12:43:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.145 12:43:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:34.145 12:43:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:34.404 12:43:00 json_config -- json_config/json_config.sh@352 -- # break 00:04:34.404 12:43:00 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:34.404 12:43:00 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:34.404 12:43:00 json_config -- json_config/common.sh@31 -- # local app=target 00:04:34.404 12:43:00 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:34.404 12:43:00 json_config -- json_config/common.sh@35 -- # [[ -n 3964113 ]] 00:04:34.404 12:43:00 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3964113 00:04:34.404 12:43:00 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:34.404 12:43:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.404 12:43:00 json_config -- json_config/common.sh@41 -- # kill -0 3964113 00:04:34.404 12:43:00 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.971 12:43:01 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.972 12:43:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.972 12:43:01 json_config -- json_config/common.sh@41 -- # kill -0 3964113 00:04:34.972 12:43:01 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.972 12:43:01 json_config -- json_config/common.sh@43 -- # break 00:04:34.972 12:43:01 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.972 12:43:01 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.972 SPDK target shutdown done 00:04:34.972 12:43:01 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:34.972 INFO: relaunching applications... 
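The shutdown just traced is a poll loop: send SIGINT, then probe the PID for up to 30 half-second intervals before declaring the target gone. The shape of that loop, extracted from the trace:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 delivers no signal; it only tests whether the PID still exists
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done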
00:04:34.972 12:43:01 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.972 12:43:01 json_config -- json_config/common.sh@9 -- # local app=target 00:04:34.972 12:43:01 json_config -- json_config/common.sh@10 -- # shift 00:04:34.972 12:43:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:34.972 12:43:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:34.972 12:43:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:34.972 12:43:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.972 12:43:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.972 12:43:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3969962 00:04:34.972 12:43:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:34.972 Waiting for target to run... 00:04:34.972 12:43:01 json_config -- json_config/common.sh@25 -- # waitforlisten 3969962 /var/tmp/spdk_tgt.sock 00:04:34.972 12:43:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 3969962 ']' 00:04:34.972 12:43:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.972 12:43:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.972 12:43:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.972 12:43:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.972 12:43:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.972 12:43:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.972 [2024-11-27 12:43:01.265155] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:04:34.972 [2024-11-27 12:43:01.265214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3969962 ] 00:04:35.231 [2024-11-27 12:43:01.569103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.231 [2024-11-27 12:43:01.600790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.519 [2024-11-27 12:43:04.682370] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a12440/0x1a1ef00) succeed. 00:04:38.519 [2024-11-27 12:43:04.693502] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a15690/0x1a9ef40) succeed. 
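The relaunch here rebuilds the entire target from the JSON file saved by the previous instance, so no RPCs need to be replayed by hand. The traced command line, trimmed to its essentials:

    build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json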
00:04:38.519 [2024-11-27 12:43:04.746319] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:38.519 12:43:04 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.519 12:43:04 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:38.519 12:43:04 json_config -- json_config/common.sh@26 -- # echo '' 00:04:38.519 00:04:38.519 12:43:04 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:38.519 12:43:04 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:38.519 INFO: Checking if target configuration is the same... 00:04:38.519 12:43:04 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.519 12:43:04 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:38.519 12:43:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.519 + '[' 2 -ne 2 ']' 00:04:38.519 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:38.519 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:38.519 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:38.519 +++ basename /dev/fd/62 00:04:38.519 ++ mktemp /tmp/62.XXX 00:04:38.519 + tmp_file_1=/tmp/62.Tmx 00:04:38.519 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.519 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:38.519 + tmp_file_2=/tmp/spdk_tgt_config.json.CvD 00:04:38.519 + ret=0 00:04:38.519 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:38.777 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.036 + diff -u /tmp/62.Tmx /tmp/spdk_tgt_config.json.CvD 00:04:39.036 + echo 'INFO: JSON config files are the same' 00:04:39.036 INFO: JSON config files are the same 00:04:39.036 + rm /tmp/62.Tmx /tmp/spdk_tgt_config.json.CvD 00:04:39.036 + exit 0 00:04:39.036 12:43:05 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:39.036 12:43:05 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:39.036 INFO: changing configuration and checking if this can be detected... 
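The "JSON config files are the same" verdict above comes from json_diff.sh, which normalizes both documents before diffing so key ordering cannot cause a false mismatch. A condensed sketch of the traced flow (assuming config_filter.py reads the config on stdin, as the trace suggests):

    live=$(mktemp /tmp/62.XXX); saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    # Sort both configs into canonical form, then compare
    $rpc save_config | config_filter.py -method sort > "$live"
    config_filter.py -method sort < spdk_tgt_config.json > "$saved"
    diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'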
00:04:39.036 12:43:05 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:39.036 12:43:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:39.036 12:43:05 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.036 12:43:05 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:39.036 12:43:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.036 + '[' 2 -ne 2 ']' 00:04:39.036 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:39.036 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:39.036 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:39.036 +++ basename /dev/fd/62 00:04:39.036 ++ mktemp /tmp/62.XXX 00:04:39.036 + tmp_file_1=/tmp/62.7q2 00:04:39.036 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.036 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:39.036 + tmp_file_2=/tmp/spdk_tgt_config.json.D1p 00:04:39.036 + ret=0 00:04:39.036 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.603 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.603 + diff -u /tmp/62.7q2 /tmp/spdk_tgt_config.json.D1p 00:04:39.603 + ret=1 00:04:39.603 + echo '=== Start of file: /tmp/62.7q2 ===' 00:04:39.603 + cat /tmp/62.7q2 00:04:39.603 + echo '=== End of file: /tmp/62.7q2 ===' 00:04:39.603 + echo '' 00:04:39.603 + echo '=== Start of file: /tmp/spdk_tgt_config.json.D1p ===' 00:04:39.603 + cat /tmp/spdk_tgt_config.json.D1p 00:04:39.603 + echo '=== End of file: /tmp/spdk_tgt_config.json.D1p ===' 00:04:39.603 + echo '' 00:04:39.603 + rm /tmp/62.7q2 /tmp/spdk_tgt_config.json.D1p 00:04:39.603 + exit 1 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:39.603 INFO: configuration change detected. 
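For the negative case, the sentinel bdev MallocBdevForConfigChangeCheck (created earlier for exactly this purpose) is deleted and the same sorted diff is rerun, now expected to fail. Sketch, continuing the assumptions above:

    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc save_config | config_filter.py -method sort > "$live"
    # The live config no longer matches the file on disk, so diff exits non-zero
    diff -u "$live" "$saved" || echo 'INFO: configuration change detected.'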
00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@324 -- # [[ -n 3969962 ]] 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.603 12:43:05 json_config -- json_config/json_config.sh@330 -- # killprocess 3969962 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@954 -- # '[' -z 3969962 ']' 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@958 -- # kill -0 3969962 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@959 -- # uname 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3969962 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.603 12:43:05 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.604 12:43:05 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3969962' 00:04:39.604 killing process with pid 3969962 00:04:39.604 12:43:05 json_config -- common/autotest_common.sh@973 -- # kill 3969962 00:04:39.604 12:43:05 json_config -- common/autotest_common.sh@978 -- # wait 3969962 00:04:42.136 12:43:08 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.136 12:43:08 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:42.136 12:43:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:42.136 12:43:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.136 12:43:08 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:42.136 12:43:08 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:42.136 INFO: Success 00:04:42.136 12:43:08 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:04:42.136 12:43:08 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:42.136 12:43:08 json_config -- nvmf/common.sh@121 -- # sync 00:04:42.136 12:43:08 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:04:42.136 12:43:08 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:04:42.136 12:43:08 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:04:42.136 12:43:08 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:42.136 12:43:08 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:04:42.136 00:04:42.136 real 0m25.615s 00:04:42.136 user 0m28.232s 00:04:42.136 sys 0m9.181s 00:04:42.136 12:43:08 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.136 12:43:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.136 ************************************ 00:04:42.136 END TEST json_config 00:04:42.136 ************************************ 00:04:42.136 12:43:08 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:42.136 12:43:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.136 12:43:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.136 12:43:08 -- common/autotest_common.sh@10 -- # set +x 00:04:42.395 ************************************ 00:04:42.395 START TEST json_config_extra_key 00:04:42.395 ************************************ 00:04:42.395 12:43:08 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:42.395 12:43:08 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.395 12:43:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.395 12:43:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.395 12:43:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.395 12:43:08 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:42.396 12:43:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:42.396 12:43:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.396 12:43:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:42.396 12:43:08 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.396 12:43:08 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.396 12:43:08 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.396 12:43:08 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:42.396 12:43:08 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.396 12:43:08 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.396 --rc genhtml_branch_coverage=1 00:04:42.396 --rc genhtml_function_coverage=1 00:04:42.396 --rc genhtml_legend=1 00:04:42.396 --rc geninfo_all_blocks=1 00:04:42.396 --rc geninfo_unexecuted_blocks=1 00:04:42.396 00:04:42.396 ' 00:04:42.396 12:43:08 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.396 --rc genhtml_branch_coverage=1 00:04:42.396 --rc genhtml_function_coverage=1 00:04:42.396 --rc genhtml_legend=1 00:04:42.396 --rc geninfo_all_blocks=1 00:04:42.396 --rc geninfo_unexecuted_blocks=1 00:04:42.396 00:04:42.396 ' 00:04:42.396 12:43:08 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.396 --rc genhtml_branch_coverage=1 00:04:42.396 --rc genhtml_function_coverage=1 00:04:42.396 --rc genhtml_legend=1 00:04:42.396 --rc geninfo_all_blocks=1 00:04:42.396 --rc geninfo_unexecuted_blocks=1 00:04:42.396 00:04:42.396 ' 00:04:42.396 12:43:08 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.396 --rc genhtml_branch_coverage=1 00:04:42.396 --rc genhtml_function_coverage=1 00:04:42.396 --rc genhtml_legend=1 00:04:42.396 --rc geninfo_all_blocks=1 00:04:42.396 --rc geninfo_unexecuted_blocks=1 00:04:42.396 00:04:42.396 ' 00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.396 
12:43:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:42.396 12:43:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.396 12:43:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.396 12:43:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.396 12:43:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.396 12:43:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.396 12:43:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.396 12:43:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.396 12:43:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:42.396 12:43:08 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.396 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.396 12:43:08 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:42.396 INFO: launching applications... 
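json_config/common.sh, sourced just above, keys its state off the app name in bash associative arrays; that is what lets one set of start/wait/shutdown helpers drive different apps and config files. The traced declarations, condensed (paths shortened to $rootdir):

    declare -A app_pid=(      [target]='' )
    declare -A app_socket=(   [target]='/var/tmp/spdk_tgt.sock' )
    declare -A app_params=(   [target]='-m 0x1 -s 1024' )
    declare -A configs_path=( [target]="$rootdir/test/json_config/extra_key.json" )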
00:04:42.396 12:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:42.396 12:43:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:42.396 12:43:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:42.396 12:43:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.396 12:43:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.396 12:43:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.396 12:43:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.396 12:43:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.396 12:43:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3971421 00:04:42.396 12:43:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.396 Waiting for target to run... 00:04:42.396 12:43:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3971421 /var/tmp/spdk_tgt.sock 00:04:42.396 12:43:08 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3971421 ']' 00:04:42.396 12:43:08 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.396 12:43:08 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.396 12:43:08 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.396 12:43:08 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.396 12:43:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:42.396 12:43:08 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:42.396 [2024-11-27 12:43:08.769544] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:04:42.396 [2024-11-27 12:43:08.769598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3971421 ] 00:04:42.982 [2024-11-27 12:43:09.069034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.982 [2024-11-27 12:43:09.101828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.240 12:43:09 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.240 12:43:09 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:43.240 12:43:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:43.240 00:04:43.240 12:43:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:43.240 INFO: shutting down applications... 
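waitforlisten, invoked above with max_retries=100, blocks until the freshly launched target is actually serving its UNIX-domain RPC socket. The real helper lives in common/autotest_common.sh; this is only a simplified approximation of its shape, with the hypothetical parts hedged in comments:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} i
        for (( i = 0; i < 100; i++ )); do
            # Bail out early if the app died before it ever listened
            kill -0 "$pid" 2>/dev/null || return 1
            # Simplified check: the real helper also verifies the RPC server responds
            [[ -S $rpc_addr ]] && return 0
            sleep 0.1
        done
        return 1
    }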
00:04:43.240 12:43:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:43.240 12:43:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:43.240 12:43:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:43.240 12:43:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3971421 ]] 00:04:43.240 12:43:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3971421 00:04:43.240 12:43:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:43.240 12:43:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.240 12:43:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3971421 00:04:43.240 12:43:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.807 12:43:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.807 12:43:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.807 12:43:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3971421 00:04:43.807 12:43:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:43.807 12:43:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:43.807 12:43:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:43.807 12:43:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:43.807 SPDK target shutdown done 00:04:43.807 12:43:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:43.807 Success 00:04:43.807 00:04:43.807 real 0m1.548s 00:04:43.807 user 0m1.289s 00:04:43.807 sys 0m0.429s 00:04:43.807 12:43:10 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.807 12:43:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.807 ************************************ 00:04:43.807 END TEST json_config_extra_key 00:04:43.807 ************************************ 00:04:43.807 12:43:10 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.807 12:43:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.807 12:43:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.807 12:43:10 -- common/autotest_common.sh@10 -- # set +x 00:04:43.807 ************************************ 00:04:43.807 START TEST alias_rpc 00:04:43.807 ************************************ 00:04:43.807 12:43:10 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:44.065 * Looking for test storage... 
00:04:44.065 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:04:44.065 12:43:10 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.065 12:43:10 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:44.065 12:43:10 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.065 12:43:10 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.065 12:43:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.066 12:43:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.066 12:43:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:44.066 12:43:10 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.066 12:43:10 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.066 --rc genhtml_branch_coverage=1 00:04:44.066 --rc genhtml_function_coverage=1 00:04:44.066 --rc genhtml_legend=1 00:04:44.066 --rc geninfo_all_blocks=1 00:04:44.066 --rc geninfo_unexecuted_blocks=1 00:04:44.066 00:04:44.066 ' 00:04:44.066 12:43:10 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.066 --rc genhtml_branch_coverage=1 00:04:44.066 --rc genhtml_function_coverage=1 00:04:44.066 --rc genhtml_legend=1 00:04:44.066 --rc geninfo_all_blocks=1 00:04:44.066 --rc geninfo_unexecuted_blocks=1 00:04:44.066 00:04:44.066 ' 00:04:44.066 12:43:10 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.066 --rc genhtml_branch_coverage=1 00:04:44.066 --rc genhtml_function_coverage=1 00:04:44.066 --rc genhtml_legend=1 00:04:44.066 --rc geninfo_all_blocks=1 00:04:44.066 --rc geninfo_unexecuted_blocks=1 00:04:44.066 00:04:44.066 ' 00:04:44.066 12:43:10 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.066 --rc genhtml_branch_coverage=1 00:04:44.066 --rc genhtml_function_coverage=1 00:04:44.066 --rc genhtml_legend=1 00:04:44.066 --rc geninfo_all_blocks=1 00:04:44.066 --rc geninfo_unexecuted_blocks=1 00:04:44.066 00:04:44.066 ' 00:04:44.066 12:43:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:44.066 12:43:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3971745 00:04:44.066 12:43:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.066 12:43:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3971745 00:04:44.066 12:43:10 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3971745 ']' 00:04:44.066 12:43:10 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.066 12:43:10 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.066 12:43:10 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.066 12:43:10 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.066 12:43:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.066 [2024-11-27 12:43:10.410894] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:04:44.066 [2024-11-27 12:43:10.410947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3971745 ] 00:04:44.325 [2024-11-27 12:43:10.497915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.325 [2024-11-27 12:43:10.538532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.892 12:43:11 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.892 12:43:11 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:44.892 12:43:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:45.151 12:43:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3971745 00:04:45.151 12:43:11 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3971745 ']' 00:04:45.151 12:43:11 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3971745 00:04:45.151 12:43:11 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:45.151 12:43:11 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.151 12:43:11 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3971745 00:04:45.151 12:43:11 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.151 12:43:11 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.151 12:43:11 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3971745' 00:04:45.151 killing process with pid 3971745 00:04:45.151 12:43:11 alias_rpc -- common/autotest_common.sh@973 -- # kill 3971745 00:04:45.151 12:43:11 alias_rpc -- common/autotest_common.sh@978 -- # wait 3971745 00:04:45.718 00:04:45.718 real 0m1.650s 00:04:45.718 user 0m1.767s 00:04:45.718 sys 0m0.501s 00:04:45.718 12:43:11 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.718 12:43:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.718 ************************************ 00:04:45.718 END TEST alias_rpc 00:04:45.718 ************************************ 00:04:45.718 12:43:11 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:45.718 12:43:11 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:45.718 12:43:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.718 12:43:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.718 12:43:11 -- common/autotest_common.sh@10 -- # set +x 00:04:45.718 ************************************ 00:04:45.718 START TEST spdkcli_tcp 00:04:45.718 ************************************ 00:04:45.718 12:43:11 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:45.718 * Looking for test storage... 
00:04:45.718 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:04:45.718 12:43:11 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:45.718 12:43:11 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:45.718 12:43:11 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:45.718 12:43:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.718 12:43:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:45.718 12:43:12 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.718 12:43:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:45.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.718 --rc genhtml_branch_coverage=1 00:04:45.718 --rc genhtml_function_coverage=1 00:04:45.719 --rc genhtml_legend=1 00:04:45.719 --rc geninfo_all_blocks=1 00:04:45.719 --rc geninfo_unexecuted_blocks=1 00:04:45.719 00:04:45.719 ' 00:04:45.719 12:43:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:45.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.719 --rc genhtml_branch_coverage=1 00:04:45.719 --rc genhtml_function_coverage=1 00:04:45.719 --rc genhtml_legend=1 00:04:45.719 --rc geninfo_all_blocks=1 00:04:45.719 --rc geninfo_unexecuted_blocks=1 
00:04:45.719 00:04:45.719 ' 00:04:45.719 12:43:12 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:45.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.719 --rc genhtml_branch_coverage=1 00:04:45.719 --rc genhtml_function_coverage=1 00:04:45.719 --rc genhtml_legend=1 00:04:45.719 --rc geninfo_all_blocks=1 00:04:45.719 --rc geninfo_unexecuted_blocks=1 00:04:45.719 00:04:45.719 ' 00:04:45.719 12:43:12 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:45.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.719 --rc genhtml_branch_coverage=1 00:04:45.719 --rc genhtml_function_coverage=1 00:04:45.719 --rc genhtml_legend=1 00:04:45.719 --rc geninfo_all_blocks=1 00:04:45.719 --rc geninfo_unexecuted_blocks=1 00:04:45.719 00:04:45.719 ' 00:04:45.719 12:43:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:04:45.719 12:43:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:45.719 12:43:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:04:45.719 12:43:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:45.719 12:43:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:45.719 12:43:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:45.719 12:43:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:45.719 12:43:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.719 12:43:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.719 12:43:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3972081 00:04:45.719 12:43:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3972081 00:04:45.719 12:43:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:45.719 12:43:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3972081 ']' 00:04:45.719 12:43:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.719 12:43:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.719 12:43:12 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.719 12:43:12 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.719 12:43:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.978 [2024-11-27 12:43:12.143286] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
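Before the method dump below, tcp.sh wires the target's UNIX-domain RPC socket to TCP so rpc.py can reach it at 127.0.0.1:9998. A minimal standalone sketch of that bridge, assuming a target is already listening on /var/tmp/spdk.sock; the reuseaddr,fork options and the cleanup trap are our additions, not part of the traced script:

```bash
#!/usr/bin/env bash
# Expose the SPDK JSON-RPC UNIX socket over TCP, as tcp.sh does below.
# Assumes spdk_tgt is already up and listening on /var/tmp/spdk.sock.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
IP_ADDRESS=127.0.0.1
PORT=9998

# Background bridge; reuseaddr/fork (our additions) let it survive
# restarts and serve more than one TCP client.
socat TCP-LISTEN:"$PORT",reuseaddr,fork UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
trap 'kill "$socat_pid" 2>/dev/null' EXIT

# Same query the trace issues: retry up to 100 times with a 2-second
# timeout, then list every RPC method the target has registered.
"$SPDK_DIR"/scripts/rpc.py -r 100 -t 2 -s "$IP_ADDRESS" -p "$PORT" rpc_get_methods
```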
00:04:45.978 [2024-11-27 12:43:12.143335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972081 ] 00:04:45.978 [2024-11-27 12:43:12.231784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.978 [2024-11-27 12:43:12.276629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.978 [2024-11-27 12:43:12.276632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.913 12:43:12 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.913 12:43:12 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:46.913 12:43:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3972303 00:04:46.913 12:43:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:46.913 12:43:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:46.913 [ 00:04:46.913 "bdev_malloc_delete", 00:04:46.913 "bdev_malloc_create", 00:04:46.913 "bdev_null_resize", 00:04:46.913 "bdev_null_delete", 00:04:46.913 "bdev_null_create", 00:04:46.913 "bdev_nvme_cuse_unregister", 00:04:46.913 "bdev_nvme_cuse_register", 00:04:46.913 "bdev_opal_new_user", 00:04:46.913 "bdev_opal_set_lock_state", 00:04:46.913 "bdev_opal_delete", 00:04:46.913 "bdev_opal_get_info", 00:04:46.913 "bdev_opal_create", 00:04:46.913 "bdev_nvme_opal_revert", 00:04:46.913 "bdev_nvme_opal_init", 00:04:46.913 "bdev_nvme_send_cmd", 00:04:46.913 "bdev_nvme_set_keys", 00:04:46.913 "bdev_nvme_get_path_iostat", 00:04:46.913 "bdev_nvme_get_mdns_discovery_info", 00:04:46.913 "bdev_nvme_stop_mdns_discovery", 00:04:46.913 "bdev_nvme_start_mdns_discovery", 00:04:46.913 "bdev_nvme_set_multipath_policy", 00:04:46.913 "bdev_nvme_set_preferred_path", 00:04:46.913 "bdev_nvme_get_io_paths", 00:04:46.913 "bdev_nvme_remove_error_injection", 00:04:46.913 "bdev_nvme_add_error_injection", 00:04:46.913 "bdev_nvme_get_discovery_info", 00:04:46.913 "bdev_nvme_stop_discovery", 00:04:46.914 "bdev_nvme_start_discovery", 00:04:46.914 "bdev_nvme_get_controller_health_info", 00:04:46.914 "bdev_nvme_disable_controller", 00:04:46.914 "bdev_nvme_enable_controller", 00:04:46.914 "bdev_nvme_reset_controller", 00:04:46.914 "bdev_nvme_get_transport_statistics", 00:04:46.914 "bdev_nvme_apply_firmware", 00:04:46.914 "bdev_nvme_detach_controller", 00:04:46.914 "bdev_nvme_get_controllers", 00:04:46.914 "bdev_nvme_attach_controller", 00:04:46.914 "bdev_nvme_set_hotplug", 00:04:46.914 "bdev_nvme_set_options", 00:04:46.914 "bdev_passthru_delete", 00:04:46.914 "bdev_passthru_create", 00:04:46.914 "bdev_lvol_set_parent_bdev", 00:04:46.914 "bdev_lvol_set_parent", 00:04:46.914 "bdev_lvol_check_shallow_copy", 00:04:46.914 "bdev_lvol_start_shallow_copy", 00:04:46.914 "bdev_lvol_grow_lvstore", 00:04:46.914 "bdev_lvol_get_lvols", 00:04:46.914 "bdev_lvol_get_lvstores", 00:04:46.914 "bdev_lvol_delete", 00:04:46.914 "bdev_lvol_set_read_only", 00:04:46.914 "bdev_lvol_resize", 00:04:46.914 "bdev_lvol_decouple_parent", 00:04:46.914 "bdev_lvol_inflate", 00:04:46.914 "bdev_lvol_rename", 00:04:46.914 "bdev_lvol_clone_bdev", 00:04:46.914 "bdev_lvol_clone", 00:04:46.914 "bdev_lvol_snapshot", 00:04:46.914 "bdev_lvol_create", 00:04:46.914 "bdev_lvol_delete_lvstore", 00:04:46.914 "bdev_lvol_rename_lvstore", 
00:04:46.914 "bdev_lvol_create_lvstore", 00:04:46.914 "bdev_raid_set_options", 00:04:46.914 "bdev_raid_remove_base_bdev", 00:04:46.914 "bdev_raid_add_base_bdev", 00:04:46.914 "bdev_raid_delete", 00:04:46.914 "bdev_raid_create", 00:04:46.914 "bdev_raid_get_bdevs", 00:04:46.914 "bdev_error_inject_error", 00:04:46.914 "bdev_error_delete", 00:04:46.914 "bdev_error_create", 00:04:46.914 "bdev_split_delete", 00:04:46.914 "bdev_split_create", 00:04:46.914 "bdev_delay_delete", 00:04:46.914 "bdev_delay_create", 00:04:46.914 "bdev_delay_update_latency", 00:04:46.914 "bdev_zone_block_delete", 00:04:46.914 "bdev_zone_block_create", 00:04:46.914 "blobfs_create", 00:04:46.914 "blobfs_detect", 00:04:46.914 "blobfs_set_cache_size", 00:04:46.914 "bdev_aio_delete", 00:04:46.914 "bdev_aio_rescan", 00:04:46.914 "bdev_aio_create", 00:04:46.914 "bdev_ftl_set_property", 00:04:46.914 "bdev_ftl_get_properties", 00:04:46.914 "bdev_ftl_get_stats", 00:04:46.914 "bdev_ftl_unmap", 00:04:46.914 "bdev_ftl_unload", 00:04:46.914 "bdev_ftl_delete", 00:04:46.914 "bdev_ftl_load", 00:04:46.914 "bdev_ftl_create", 00:04:46.914 "bdev_virtio_attach_controller", 00:04:46.914 "bdev_virtio_scsi_get_devices", 00:04:46.914 "bdev_virtio_detach_controller", 00:04:46.914 "bdev_virtio_blk_set_hotplug", 00:04:46.914 "bdev_iscsi_delete", 00:04:46.914 "bdev_iscsi_create", 00:04:46.914 "bdev_iscsi_set_options", 00:04:46.914 "accel_error_inject_error", 00:04:46.914 "ioat_scan_accel_module", 00:04:46.914 "dsa_scan_accel_module", 00:04:46.914 "iaa_scan_accel_module", 00:04:46.914 "keyring_file_remove_key", 00:04:46.914 "keyring_file_add_key", 00:04:46.914 "keyring_linux_set_options", 00:04:46.914 "fsdev_aio_delete", 00:04:46.914 "fsdev_aio_create", 00:04:46.914 "iscsi_get_histogram", 00:04:46.914 "iscsi_enable_histogram", 00:04:46.914 "iscsi_set_options", 00:04:46.914 "iscsi_get_auth_groups", 00:04:46.914 "iscsi_auth_group_remove_secret", 00:04:46.914 "iscsi_auth_group_add_secret", 00:04:46.914 "iscsi_delete_auth_group", 00:04:46.914 "iscsi_create_auth_group", 00:04:46.914 "iscsi_set_discovery_auth", 00:04:46.914 "iscsi_get_options", 00:04:46.914 "iscsi_target_node_request_logout", 00:04:46.914 "iscsi_target_node_set_redirect", 00:04:46.914 "iscsi_target_node_set_auth", 00:04:46.914 "iscsi_target_node_add_lun", 00:04:46.914 "iscsi_get_stats", 00:04:46.914 "iscsi_get_connections", 00:04:46.914 "iscsi_portal_group_set_auth", 00:04:46.914 "iscsi_start_portal_group", 00:04:46.914 "iscsi_delete_portal_group", 00:04:46.914 "iscsi_create_portal_group", 00:04:46.914 "iscsi_get_portal_groups", 00:04:46.914 "iscsi_delete_target_node", 00:04:46.914 "iscsi_target_node_remove_pg_ig_maps", 00:04:46.914 "iscsi_target_node_add_pg_ig_maps", 00:04:46.914 "iscsi_create_target_node", 00:04:46.914 "iscsi_get_target_nodes", 00:04:46.914 "iscsi_delete_initiator_group", 00:04:46.914 "iscsi_initiator_group_remove_initiators", 00:04:46.914 "iscsi_initiator_group_add_initiators", 00:04:46.914 "iscsi_create_initiator_group", 00:04:46.914 "iscsi_get_initiator_groups", 00:04:46.914 "nvmf_set_crdt", 00:04:46.914 "nvmf_set_config", 00:04:46.914 "nvmf_set_max_subsystems", 00:04:46.914 "nvmf_stop_mdns_prr", 00:04:46.914 "nvmf_publish_mdns_prr", 00:04:46.914 "nvmf_subsystem_get_listeners", 00:04:46.914 "nvmf_subsystem_get_qpairs", 00:04:46.914 "nvmf_subsystem_get_controllers", 00:04:46.914 "nvmf_get_stats", 00:04:46.914 "nvmf_get_transports", 00:04:46.914 "nvmf_create_transport", 00:04:46.914 "nvmf_get_targets", 00:04:46.914 "nvmf_delete_target", 00:04:46.914 "nvmf_create_target", 
00:04:46.914 "nvmf_subsystem_allow_any_host", 00:04:46.914 "nvmf_subsystem_set_keys", 00:04:46.914 "nvmf_subsystem_remove_host", 00:04:46.914 "nvmf_subsystem_add_host", 00:04:46.914 "nvmf_ns_remove_host", 00:04:46.914 "nvmf_ns_add_host", 00:04:46.914 "nvmf_subsystem_remove_ns", 00:04:46.914 "nvmf_subsystem_set_ns_ana_group", 00:04:46.914 "nvmf_subsystem_add_ns", 00:04:46.914 "nvmf_subsystem_listener_set_ana_state", 00:04:46.914 "nvmf_discovery_get_referrals", 00:04:46.914 "nvmf_discovery_remove_referral", 00:04:46.914 "nvmf_discovery_add_referral", 00:04:46.914 "nvmf_subsystem_remove_listener", 00:04:46.914 "nvmf_subsystem_add_listener", 00:04:46.914 "nvmf_delete_subsystem", 00:04:46.914 "nvmf_create_subsystem", 00:04:46.914 "nvmf_get_subsystems", 00:04:46.914 "env_dpdk_get_mem_stats", 00:04:46.914 "nbd_get_disks", 00:04:46.914 "nbd_stop_disk", 00:04:46.914 "nbd_start_disk", 00:04:46.914 "ublk_recover_disk", 00:04:46.914 "ublk_get_disks", 00:04:46.914 "ublk_stop_disk", 00:04:46.914 "ublk_start_disk", 00:04:46.914 "ublk_destroy_target", 00:04:46.914 "ublk_create_target", 00:04:46.914 "virtio_blk_create_transport", 00:04:46.914 "virtio_blk_get_transports", 00:04:46.914 "vhost_controller_set_coalescing", 00:04:46.914 "vhost_get_controllers", 00:04:46.914 "vhost_delete_controller", 00:04:46.914 "vhost_create_blk_controller", 00:04:46.914 "vhost_scsi_controller_remove_target", 00:04:46.914 "vhost_scsi_controller_add_target", 00:04:46.914 "vhost_start_scsi_controller", 00:04:46.914 "vhost_create_scsi_controller", 00:04:46.914 "thread_set_cpumask", 00:04:46.914 "scheduler_set_options", 00:04:46.914 "framework_get_governor", 00:04:46.914 "framework_get_scheduler", 00:04:46.914 "framework_set_scheduler", 00:04:46.914 "framework_get_reactors", 00:04:46.914 "thread_get_io_channels", 00:04:46.914 "thread_get_pollers", 00:04:46.914 "thread_get_stats", 00:04:46.914 "framework_monitor_context_switch", 00:04:46.914 "spdk_kill_instance", 00:04:46.914 "log_enable_timestamps", 00:04:46.914 "log_get_flags", 00:04:46.914 "log_clear_flag", 00:04:46.914 "log_set_flag", 00:04:46.914 "log_get_level", 00:04:46.914 "log_set_level", 00:04:46.914 "log_get_print_level", 00:04:46.914 "log_set_print_level", 00:04:46.914 "framework_enable_cpumask_locks", 00:04:46.914 "framework_disable_cpumask_locks", 00:04:46.914 "framework_wait_init", 00:04:46.914 "framework_start_init", 00:04:46.914 "scsi_get_devices", 00:04:46.914 "bdev_get_histogram", 00:04:46.914 "bdev_enable_histogram", 00:04:46.914 "bdev_set_qos_limit", 00:04:46.914 "bdev_set_qd_sampling_period", 00:04:46.914 "bdev_get_bdevs", 00:04:46.914 "bdev_reset_iostat", 00:04:46.914 "bdev_get_iostat", 00:04:46.914 "bdev_examine", 00:04:46.914 "bdev_wait_for_examine", 00:04:46.914 "bdev_set_options", 00:04:46.914 "accel_get_stats", 00:04:46.914 "accel_set_options", 00:04:46.914 "accel_set_driver", 00:04:46.914 "accel_crypto_key_destroy", 00:04:46.914 "accel_crypto_keys_get", 00:04:46.914 "accel_crypto_key_create", 00:04:46.914 "accel_assign_opc", 00:04:46.914 "accel_get_module_info", 00:04:46.914 "accel_get_opc_assignments", 00:04:46.914 "vmd_rescan", 00:04:46.914 "vmd_remove_device", 00:04:46.914 "vmd_enable", 00:04:46.914 "sock_get_default_impl", 00:04:46.914 "sock_set_default_impl", 00:04:46.914 "sock_impl_set_options", 00:04:46.914 "sock_impl_get_options", 00:04:46.914 "iobuf_get_stats", 00:04:46.914 "iobuf_set_options", 00:04:46.914 "keyring_get_keys", 00:04:46.914 "framework_get_pci_devices", 00:04:46.914 "framework_get_config", 00:04:46.914 "framework_get_subsystems", 
00:04:46.914 "fsdev_set_opts", 00:04:46.914 "fsdev_get_opts", 00:04:46.914 "trace_get_info", 00:04:46.914 "trace_get_tpoint_group_mask", 00:04:46.914 "trace_disable_tpoint_group", 00:04:46.914 "trace_enable_tpoint_group", 00:04:46.914 "trace_clear_tpoint_mask", 00:04:46.914 "trace_set_tpoint_mask", 00:04:46.914 "notify_get_notifications", 00:04:46.914 "notify_get_types", 00:04:46.914 "spdk_get_version", 00:04:46.914 "rpc_get_methods" 00:04:46.914 ] 00:04:46.914 12:43:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:46.914 12:43:13 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.914 12:43:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:46.914 12:43:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:46.914 12:43:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3972081 00:04:46.914 12:43:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3972081 ']' 00:04:46.914 12:43:13 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3972081 00:04:46.914 12:43:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:46.914 12:43:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.914 12:43:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3972081 00:04:46.914 12:43:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.914 12:43:13 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.914 12:43:13 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3972081' 00:04:46.914 killing process with pid 3972081 00:04:46.914 12:43:13 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3972081 00:04:46.914 12:43:13 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3972081 00:04:47.480 00:04:47.480 real 0m1.675s 00:04:47.480 user 0m3.013s 00:04:47.480 sys 0m0.572s 00:04:47.480 12:43:13 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.480 12:43:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.480 ************************************ 00:04:47.480 END TEST spdkcli_tcp 00:04:47.480 ************************************ 00:04:47.480 12:43:13 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:47.480 12:43:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.480 12:43:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.480 12:43:13 -- common/autotest_common.sh@10 -- # set +x 00:04:47.480 ************************************ 00:04:47.480 START TEST dpdk_mem_utility 00:04:47.480 ************************************ 00:04:47.480 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:47.480 * Looking for test storage... 
00:04:47.480 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:04:47.480 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.481 12:43:13 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.481 --rc genhtml_branch_coverage=1 00:04:47.481 --rc genhtml_function_coverage=1 00:04:47.481 --rc genhtml_legend=1 00:04:47.481 --rc geninfo_all_blocks=1 00:04:47.481 --rc geninfo_unexecuted_blocks=1 00:04:47.481 00:04:47.481 ' 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.481 --rc 
genhtml_branch_coverage=1 00:04:47.481 --rc genhtml_function_coverage=1 00:04:47.481 --rc genhtml_legend=1 00:04:47.481 --rc geninfo_all_blocks=1 00:04:47.481 --rc geninfo_unexecuted_blocks=1 00:04:47.481 00:04:47.481 ' 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.481 --rc genhtml_branch_coverage=1 00:04:47.481 --rc genhtml_function_coverage=1 00:04:47.481 --rc genhtml_legend=1 00:04:47.481 --rc geninfo_all_blocks=1 00:04:47.481 --rc geninfo_unexecuted_blocks=1 00:04:47.481 00:04:47.481 ' 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.481 --rc genhtml_branch_coverage=1 00:04:47.481 --rc genhtml_function_coverage=1 00:04:47.481 --rc genhtml_legend=1 00:04:47.481 --rc geninfo_all_blocks=1 00:04:47.481 --rc geninfo_unexecuted_blocks=1 00:04:47.481 00:04:47.481 ' 00:04:47.481 12:43:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:47.481 12:43:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3972424 00:04:47.481 12:43:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3972424 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3972424 ']' 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.481 12:43:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.481 12:43:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.740 [2024-11-27 12:43:13.890895] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
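The dpdk_mem_utility run that follows exercises two tools: the env_dpdk_get_mem_stats RPC, which makes the target write a memory dump (the trace shows it answering with /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py, which summarizes that dump. A sketch of the same flow against any locally running spdk_tgt:

```bash
#!/usr/bin/env bash
# Dump and inspect DPDK memory for a running spdk_tgt, mirroring
# test_dpdk_mem_info.sh as traced below.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# Ask the target to write its DPDK memory statistics; the reply names
# the dump file (/tmp/spdk_mem_dump.txt in this log).
"$SPDK_DIR"/scripts/rpc.py env_dpdk_get_mem_stats

# Summarize heaps, mempools and memzones from the dump...
"$SPDK_DIR"/scripts/dpdk_mem_info.py

# ...then show the per-element breakdown of heap 0, as the test does.
"$SPDK_DIR"/scripts/dpdk_mem_info.py -m 0
```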
00:04:47.740 [2024-11-27 12:43:13.890945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972424 ] 00:04:47.740 [2024-11-27 12:43:13.980110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.740 [2024-11-27 12:43:14.021567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.675 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.675 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:48.675 12:43:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:48.675 12:43:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:48.675 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.675 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:48.675 { 00:04:48.675 "filename": "/tmp/spdk_mem_dump.txt" 00:04:48.675 } 00:04:48.675 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.675 12:43:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:48.675 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:48.675 1 heaps totaling size 818.000000 MiB 00:04:48.675 size: 818.000000 MiB heap id: 0 00:04:48.675 end heaps---------- 00:04:48.675 9 mempools totaling size 603.782043 MiB 00:04:48.675 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:48.675 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:48.675 size: 100.555481 MiB name: bdev_io_3972424 00:04:48.675 size: 50.003479 MiB name: msgpool_3972424 00:04:48.676 size: 36.509338 MiB name: fsdev_io_3972424 00:04:48.676 size: 21.763794 MiB name: PDU_Pool 00:04:48.676 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:48.676 size: 4.133484 MiB name: evtpool_3972424 00:04:48.676 size: 0.026123 MiB name: Session_Pool 00:04:48.676 end mempools------- 00:04:48.676 6 memzones totaling size 4.142822 MiB 00:04:48.676 size: 1.000366 MiB name: RG_ring_0_3972424 00:04:48.676 size: 1.000366 MiB name: RG_ring_1_3972424 00:04:48.676 size: 1.000366 MiB name: RG_ring_4_3972424 00:04:48.676 size: 1.000366 MiB name: RG_ring_5_3972424 00:04:48.676 size: 0.125366 MiB name: RG_ring_2_3972424 00:04:48.676 size: 0.015991 MiB name: RG_ring_3_3972424 00:04:48.676 end memzones------- 00:04:48.676 12:43:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:48.676 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:48.676 list of free elements. 
size: 10.852478 MiB 00:04:48.676 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:48.676 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:48.676 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:48.676 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:48.676 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:48.676 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:48.676 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:48.676 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:48.676 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:48.676 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:48.676 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:48.676 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:48.676 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:48.676 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:48.676 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:48.676 list of standard malloc elements. size: 199.218628 MiB 00:04:48.676 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:48.676 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:48.676 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:48.676 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:48.676 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:48.676 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:48.676 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:48.676 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:48.676 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:48.676 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:48.676 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:48.676 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:48.676 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:48.676 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:48.676 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:48.676 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:48.676 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:48.676 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:48.676 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:48.676 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:48.676 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:48.676 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:48.676 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:48.676 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:48.676 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:48.676 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:48.676 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:48.676 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:48.676 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:48.676 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:48.676 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:48.676 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:48.676 list of memzone associated elements. size: 607.928894 MiB 00:04:48.676 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:48.676 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:48.676 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:48.676 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:48.676 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:48.676 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3972424_0 00:04:48.676 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:48.676 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3972424_0 00:04:48.676 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:48.676 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3972424_0 00:04:48.676 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:48.676 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:48.676 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:48.676 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:48.676 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:48.676 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3972424_0 00:04:48.676 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:48.676 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3972424 00:04:48.676 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:48.676 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3972424 00:04:48.676 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:48.676 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:48.676 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:48.676 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:48.676 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:48.676 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:48.676 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:48.676 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:48.676 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:48.676 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3972424 00:04:48.676 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:48.676 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3972424 00:04:48.676 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:48.676 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3972424 00:04:48.677 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:48.677 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3972424 00:04:48.677 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:48.677 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3972424 00:04:48.677 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:48.677 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3972424 00:04:48.677 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:48.677 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:48.677 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:48.677 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:48.677 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:48.677 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:48.677 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:48.677 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3972424 00:04:48.677 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:48.677 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3972424 00:04:48.677 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:48.677 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:48.677 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:48.677 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:48.677 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:48.677 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3972424 00:04:48.677 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:48.677 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:48.677 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:48.677 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3972424 00:04:48.677 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:48.677 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3972424 00:04:48.677 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:48.677 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3972424 00:04:48.677 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:48.677 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:48.677 12:43:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:48.677 12:43:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3972424 00:04:48.677 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3972424 ']' 00:04:48.677 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3972424 00:04:48.677 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:48.677 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.677 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3972424 00:04:48.677 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.677 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.677 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3972424' 00:04:48.677 killing process with pid 3972424 00:04:48.677 12:43:14 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3972424 00:04:48.677 12:43:14 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3972424 00:04:48.936 00:04:48.936 real 0m1.525s 00:04:48.936 user 0m1.530s 00:04:48.936 sys 0m0.508s 00:04:48.936 12:43:15 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.936 12:43:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:48.936 ************************************ 00:04:48.936 END TEST dpdk_mem_utility 00:04:48.936 ************************************ 00:04:48.936 12:43:15 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:48.936 12:43:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.936 12:43:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.936 12:43:15 -- common/autotest_common.sh@10 -- # set +x 00:04:48.936 ************************************ 00:04:48.936 START TEST event 00:04:48.936 ************************************ 00:04:48.936 12:43:15 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:49.194 * Looking for test storage... 00:04:49.194 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:49.194 12:43:15 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:49.195 12:43:15 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:49.195 12:43:15 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.195 12:43:15 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.195 12:43:15 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.195 12:43:15 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.195 12:43:15 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.195 12:43:15 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.195 12:43:15 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.195 12:43:15 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.195 12:43:15 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.195 12:43:15 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.195 12:43:15 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.195 12:43:15 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.195 12:43:15 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.195 12:43:15 event -- scripts/common.sh@344 -- # case "$op" in 00:04:49.195 12:43:15 event -- scripts/common.sh@345 -- # : 1 00:04:49.195 12:43:15 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.195 12:43:15 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.195 12:43:15 event -- scripts/common.sh@365 -- # decimal 1 00:04:49.195 12:43:15 event -- scripts/common.sh@353 -- # local d=1 00:04:49.195 12:43:15 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.195 12:43:15 event -- scripts/common.sh@355 -- # echo 1 00:04:49.195 12:43:15 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.195 12:43:15 event -- scripts/common.sh@366 -- # decimal 2 00:04:49.195 12:43:15 event -- scripts/common.sh@353 -- # local d=2 00:04:49.195 12:43:15 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.195 12:43:15 event -- scripts/common.sh@355 -- # echo 2 00:04:49.195 12:43:15 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.195 12:43:15 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.195 12:43:15 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.195 12:43:15 event -- scripts/common.sh@368 -- # return 0 00:04:49.195 12:43:15 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.195 12:43:15 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.195 --rc genhtml_branch_coverage=1 00:04:49.195 --rc genhtml_function_coverage=1 00:04:49.195 --rc genhtml_legend=1 00:04:49.195 --rc geninfo_all_blocks=1 00:04:49.195 --rc geninfo_unexecuted_blocks=1 00:04:49.195 00:04:49.195 ' 00:04:49.195 12:43:15 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.195 --rc genhtml_branch_coverage=1 00:04:49.195 --rc genhtml_function_coverage=1 00:04:49.195 --rc genhtml_legend=1 00:04:49.195 --rc geninfo_all_blocks=1 00:04:49.195 --rc geninfo_unexecuted_blocks=1 00:04:49.195 00:04:49.195 ' 00:04:49.195 12:43:15 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.195 --rc genhtml_branch_coverage=1 00:04:49.195 --rc genhtml_function_coverage=1 00:04:49.195 --rc genhtml_legend=1 00:04:49.195 --rc geninfo_all_blocks=1 00:04:49.195 --rc geninfo_unexecuted_blocks=1 00:04:49.195 00:04:49.195 ' 00:04:49.195 12:43:15 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.195 --rc genhtml_branch_coverage=1 00:04:49.195 --rc genhtml_function_coverage=1 00:04:49.195 --rc genhtml_legend=1 00:04:49.195 --rc geninfo_all_blocks=1 00:04:49.195 --rc geninfo_unexecuted_blocks=1 00:04:49.195 00:04:49.195 ' 00:04:49.195 12:43:15 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:49.195 12:43:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:49.195 12:43:15 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:49.195 12:43:15 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:49.195 12:43:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.195 12:43:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.195 ************************************ 00:04:49.195 START TEST event_perf 00:04:49.195 ************************************ 00:04:49.195 12:43:15 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
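event_perf schedules events across all enabled reactors and prints one completion count per lcore, so roughly equal counts (as in the run below) indicate fair scheduling. A sketch that runs the same 4-core, 1-second benchmark and applies a balance check; the 5% threshold is our assumption, not something the test enforces:

```bash
#!/usr/bin/env bash
# Run event_perf on cores 0-3 for 1 second and verify the per-lcore
# event counts stay close together. The 5% bound is illustrative.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk

out=$("$SPDK_DIR"/test/event/event_perf/event_perf -m 0xF -t 1)
printf '%s\n' "$out"

# Output lines look like "lcore 0: 216028"; track min and max counts.
printf '%s\n' "$out" | awk '
  /lcore [0-9]+:/ { c = $3 + 0
                    if (!seen || c < min) min = c
                    if (!seen || c > max) max = c
                    seen = 1 }
  END { if (!seen) { print "no lcore lines found"; exit 1 }
        if (max > min * 1.05) { print "lcore counts imbalanced"; exit 1 }
        print "lcore counts within 5% of each other" }'
```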
00:04:49.195 Running I/O for 1 seconds...[2024-11-27 12:43:15.501245] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:04:49.195 [2024-11-27 12:43:15.501325] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972763 ] 00:04:49.453 [2024-11-27 12:43:15.591347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.453 [2024-11-27 12:43:15.633445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.453 [2024-11-27 12:43:15.633531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.453 [2024-11-27 12:43:15.633624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.453 [2024-11-27 12:43:15.633629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.388 Running I/O for 1 seconds... 00:04:50.388 lcore 0: 216028 00:04:50.388 lcore 1: 216028 00:04:50.388 lcore 2: 216027 00:04:50.388 lcore 3: 216026 00:04:50.388 done. 00:04:50.388 00:04:50.388 real 0m1.193s 00:04:50.388 user 0m4.095s 00:04:50.388 sys 0m0.094s 00:04:50.388 12:43:16 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.388 12:43:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:50.388 ************************************ 00:04:50.388 END TEST event_perf 00:04:50.388 ************************************ 00:04:50.388 12:43:16 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:50.388 12:43:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:50.388 12:43:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.388 12:43:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.388 ************************************ 00:04:50.388 START TEST event_reactor 00:04:50.388 ************************************ 00:04:50.388 12:43:16 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:50.388 [2024-11-27 12:43:16.767395] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
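The reactor test that starts here registers pollers on a single core and echoes a marker each time one fires: "oneshot" for the run-once poller and "tick N" for the timed pollers (the number appears to track the poller's period, judging by the relative frequency of tick 100/250/500 below), bracketed by test_start/test_end. A loose sanity check over that output, assuming a standalone run without the CI timestamp prefixes:

```bash
#!/usr/bin/env bash
# Run the single-core reactor test for 1 second and confirm it produced
# a complete run: a test_start marker, at least one poller line
# ("oneshot" or "tick <period>"), and a closing test_end marker.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk

out=$("$SPDK_DIR"/test/event/reactor/reactor -t 1)
printf '%s\n' "$out"

printf '%s\n' "$out" | grep -q 'test_start' || { echo 'missing test_start' >&2; exit 1; }
printf '%s\n' "$out" | grep -Eq '^(oneshot|tick [0-9]+)' || { echo 'no poller output' >&2; exit 1; }
printf '%s\n' "$out" | grep -q 'test_end' || { echo 'missing test_end' >&2; exit 1; }
echo 'reactor run complete'
```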
00:04:50.388 [2024-11-27 12:43:16.767450] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973046 ] 00:04:50.647 [2024-11-27 12:43:16.855363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.647 [2024-11-27 12:43:16.893907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.582 test_start 00:04:51.582 oneshot 00:04:51.582 tick 100 00:04:51.582 tick 100 00:04:51.582 tick 250 00:04:51.582 tick 100 00:04:51.582 tick 100 00:04:51.582 tick 250 00:04:51.582 tick 100 00:04:51.582 tick 500 00:04:51.582 tick 100 00:04:51.582 tick 100 00:04:51.582 tick 250 00:04:51.582 tick 100 00:04:51.582 tick 100 00:04:51.582 test_end 00:04:51.582 00:04:51.582 real 0m1.183s 00:04:51.582 user 0m1.089s 00:04:51.582 sys 0m0.090s 00:04:51.582 12:43:17 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.582 12:43:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:51.582 ************************************ 00:04:51.582 END TEST event_reactor 00:04:51.582 ************************************ 00:04:51.582 12:43:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.582 12:43:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:51.582 12:43:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.582 12:43:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.840 ************************************ 00:04:51.840 START TEST event_reactor_perf 00:04:51.840 ************************************ 00:04:51.840 12:43:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.840 [2024-11-27 12:43:18.023935] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
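reactor_perf measures raw event throughput on one core; the run below reports 537634 events per second. Extracting that figure from a standalone run is a one-liner:

```bash
#!/usr/bin/env bash
# Run the reactor event-throughput benchmark for 1 second and print just
# the rate from its "Performance: N events per second" line.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk

"$SPDK_DIR"/test/event/reactor_perf/reactor_perf -t 1 \
  | awk '/Performance:/ { print $2, "events/sec" }'
```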
00:04:51.840 [2024-11-27 12:43:18.024002] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973327 ] 00:04:51.840 [2024-11-27 12:43:18.114299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.840 [2024-11-27 12:43:18.151881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.215 test_start 00:04:53.215 test_end 00:04:53.215 Performance: 537634 events per second 00:04:53.215 00:04:53.215 real 0m1.186s 00:04:53.215 user 0m1.087s 00:04:53.215 sys 0m0.094s 00:04:53.215 12:43:19 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.215 12:43:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:53.215 ************************************ 00:04:53.215 END TEST event_reactor_perf 00:04:53.215 ************************************ 00:04:53.215 12:43:19 event -- event/event.sh@49 -- # uname -s 00:04:53.215 12:43:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:53.215 12:43:19 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:53.215 12:43:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.215 12:43:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.215 12:43:19 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.215 ************************************ 00:04:53.215 START TEST event_scheduler 00:04:53.215 ************************************ 00:04:53.215 12:43:19 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:53.215 * Looking for test storage... 
00:04:53.215 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:04:53.215 12:43:19 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.215 12:43:19 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.215 12:43:19 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.215 12:43:19 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.215 12:43:19 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:53.215 12:43:19 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.215 12:43:19 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.215 --rc genhtml_branch_coverage=1 00:04:53.215 --rc genhtml_function_coverage=1 00:04:53.215 --rc genhtml_legend=1 00:04:53.215 --rc geninfo_all_blocks=1 00:04:53.215 --rc geninfo_unexecuted_blocks=1 00:04:53.215 00:04:53.215 ' 00:04:53.215 12:43:19 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.215 --rc genhtml_branch_coverage=1 00:04:53.215 --rc genhtml_function_coverage=1 00:04:53.215 --rc genhtml_legend=1 00:04:53.215 --rc geninfo_all_blocks=1 00:04:53.215 --rc geninfo_unexecuted_blocks=1 00:04:53.215 00:04:53.215 ' 00:04:53.216 12:43:19 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.216 --rc genhtml_branch_coverage=1 00:04:53.216 --rc genhtml_function_coverage=1 00:04:53.216 --rc genhtml_legend=1 00:04:53.216 --rc geninfo_all_blocks=1 00:04:53.216 --rc geninfo_unexecuted_blocks=1 00:04:53.216 00:04:53.216 ' 00:04:53.216 12:43:19 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.216 --rc genhtml_branch_coverage=1 00:04:53.216 --rc genhtml_function_coverage=1 00:04:53.216 --rc genhtml_legend=1 00:04:53.216 --rc geninfo_all_blocks=1 00:04:53.216 --rc geninfo_unexecuted_blocks=1 00:04:53.216 00:04:53.216 ' 00:04:53.216 12:43:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:53.216 12:43:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3973653 00:04:53.216 12:43:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.216 12:43:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:53.216 12:43:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3973653 
00:04:53.216 12:43:19 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3973653 ']' 00:04:53.216 12:43:19 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.216 12:43:19 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.216 12:43:19 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.216 12:43:19 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.216 12:43:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.216 [2024-11-27 12:43:19.494301] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:04:53.216 [2024-11-27 12:43:19.494354] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973653 ] 00:04:53.216 [2024-11-27 12:43:19.575219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:53.474 [2024-11-27 12:43:19.618117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.474 [2024-11-27 12:43:19.618204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.474 [2024-11-27 12:43:19.618285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.474 [2024-11-27 12:43:19.618288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.041 12:43:20 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.041 12:43:20 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:54.041 12:43:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:54.041 12:43:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.041 12:43:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.041 [2024-11-27 12:43:20.332717] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:54.041 [2024-11-27 12:43:20.332744] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:54.041 [2024-11-27 12:43:20.332755] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:54.041 [2024-11-27 12:43:20.332763] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:54.041 [2024-11-27 12:43:20.332770] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:54.041 12:43:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.041 12:43:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:54.041 12:43:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.041 12:43:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.041 [2024-11-27 12:43:20.409596] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
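With the scheduler app initialized (the dynamic scheduler still comes up even though the DPDK governor fails on this host's partial SMT mask), the create-thread test drives it purely over JSON-RPC through a test-local rpc.py plugin. A sketch of the whole sequence, init RPCs included; the PYTHONPATH line is our assumption about how the plugin is made importable, and only two of the test's ten threads are created here:

```bash
#!/usr/bin/env bash
# Drive the scheduler test app over JSON-RPC, as the trace below does:
# choose the dynamic scheduler, finish init, then create pinned threads
# with a cpumask (-m) and an active percentage (-a).
# Assumes the app was started with --wait-for-rpc.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc="$SPDK_DIR/scripts/rpc.py"
export PYTHONPATH="$SPDK_DIR/test/event/scheduler:$PYTHONPATH"  # assumed plugin location

"$rpc" framework_set_scheduler dynamic
"$rpc" framework_start_init

# A fully busy thread pinned to core 0 and an idle one pinned to core 1;
# scheduler_thread_create prints the new thread id.
"$rpc" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0)

# Activity can be retargeted later, as the test does with thread 11:
"$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
```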
00:04:54.041 12:43:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.041 12:43:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:54.041 12:43:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.041 12:43:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.041 12:43:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.299 ************************************ 00:04:54.299 START TEST scheduler_create_thread 00:04:54.299 ************************************ 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.299 2 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.299 3 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.299 4 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.299 5 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.299 6 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.299 7 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.299 8 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.299 9 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.299 10 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.299 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.300 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.300 12:43:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:54.300 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.300 12:43:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.673 12:43:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.673 12:43:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:55.673 12:43:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:55.673 12:43:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.673 12:43:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.078 12:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.078 00:04:57.078 real 0m2.619s 00:04:57.078 user 0m0.025s 00:04:57.078 sys 0m0.005s 00:04:57.078 12:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.078 12:43:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.078 ************************************ 00:04:57.078 END TEST scheduler_create_thread 00:04:57.078 ************************************ 00:04:57.078 12:43:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:57.078 12:43:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3973653 00:04:57.078 12:43:23 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3973653 ']' 00:04:57.078 12:43:23 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3973653 00:04:57.078 12:43:23 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:57.078 12:43:23 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.078 12:43:23 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3973653 00:04:57.078 12:43:23 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:57.078 12:43:23 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:57.078 12:43:23 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3973653' 00:04:57.078 killing process with pid 3973653 00:04:57.078 12:43:23 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3973653 00:04:57.078 12:43:23 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3973653 00:04:57.336 [2024-11-27 12:43:23.547579] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
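The teardown traced above is autotest_common.sh's killprocess idiom: confirm the pid is still alive with kill -0, look up its comm name (the '[' reactor_2 = sudo ']' check in the trace), then kill and wait so the exit status is reaped. A trimmed sketch of that pattern, with an illustrative name rather than the exact helper:

killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
    # The real helper special-cases sudo-wrapped processes via
    # 'ps --no-headers -o comm=' before deciding how to kill.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                      # reap; only valid for our own children
}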
00:04:57.336 00:04:57.336 real 0m4.448s 00:04:57.336 user 0m8.417s 00:04:57.336 sys 0m0.461s 00:04:57.336 12:43:23 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.336 12:43:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.336 ************************************ 00:04:57.336 END TEST event_scheduler 00:04:57.336 ************************************ 00:04:57.594 12:43:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:57.594 12:43:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:57.594 12:43:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.594 12:43:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.594 12:43:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.594 ************************************ 00:04:57.594 START TEST app_repeat 00:04:57.594 ************************************ 00:04:57.594 12:43:23 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3974458 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3974458' 00:04:57.594 Process app_repeat pid: 3974458 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:57.594 spdk_app_start Round 0 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3974458 /var/tmp/spdk-nbd.sock 00:04:57.594 12:43:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3974458 ']' 00:04:57.594 12:43:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.594 12:43:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.594 12:43:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.594 12:43:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.594 12:43:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.594 12:43:23 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:57.594 [2024-11-27 12:43:23.827671] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:04:57.594 [2024-11-27 12:43:23.827728] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3974458 ] 00:04:57.594 [2024-11-27 12:43:23.918081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.594 [2024-11-27 12:43:23.959442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.594 [2024-11-27 12:43:23.959445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.528 12:43:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.528 12:43:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:58.528 12:43:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.528 Malloc0 00:04:58.528 12:43:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.786 Malloc1 00:04:58.786 12:43:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.786 12:43:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.044 /dev/nbd0 00:04:59.044 12:43:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.044 12:43:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
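What the trace performs for each device here is the waitfornbd check: poll /proc/partitions until the nbd name appears, then prove the device actually serves I/O by reading one 4 KiB block with O_DIRECT and checking that dd produced a non-empty file. A condensed sketch under the same assumptions (4096-byte block, 20 retries; the /tmp test-file path is illustrative):

waitfornbd_sketch() {
    local nbd_name=$1 i tmp=/tmp/nbdtest
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    (( i <= 20 )) || return 1                                # never showed up
    # One direct-I/O read proves the kernel<->SPDK nbd path is live.
    dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct || return 1
    local size
    size=$(stat -c %s $tmp)
    rm -f $tmp
    [ "$size" != 0 ]
}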
00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.044 1+0 records in 00:04:59.044 1+0 records out 00:04:59.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212778 s, 19.3 MB/s 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:59.044 12:43:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:59.044 12:43:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.044 12:43:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.044 12:43:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.302 /dev/nbd1 00:04:59.302 12:43:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.302 12:43:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.303 1+0 records in 00:04:59.303 1+0 records out 00:04:59.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221591 s, 18.5 MB/s 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:59.303 12:43:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:59.303 12:43:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.303 12:43:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.303 12:43:25 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.303 12:43:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.303 12:43:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.561 { 00:04:59.561 "nbd_device": "/dev/nbd0", 00:04:59.561 "bdev_name": "Malloc0" 00:04:59.561 }, 00:04:59.561 { 00:04:59.561 "nbd_device": "/dev/nbd1", 00:04:59.561 "bdev_name": "Malloc1" 00:04:59.561 } 00:04:59.561 ]' 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.561 { 00:04:59.561 "nbd_device": "/dev/nbd0", 00:04:59.561 "bdev_name": "Malloc0" 00:04:59.561 }, 00:04:59.561 { 00:04:59.561 "nbd_device": "/dev/nbd1", 00:04:59.561 "bdev_name": "Malloc1" 00:04:59.561 } 00:04:59.561 ]' 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.561 /dev/nbd1' 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.561 /dev/nbd1' 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.561 256+0 records in 00:04:59.561 256+0 records out 00:04:59.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103671 s, 101 MB/s 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.561 256+0 records in 00:04:59.561 256+0 records out 00:04:59.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181807 s, 57.7 MB/s 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.561 256+0 records in 00:04:59.561 256+0 records out 00:04:59.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200411 s, 52.3 MB/s 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.561 12:43:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.819 12:43:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.819 12:43:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.819 12:43:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.819 12:43:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.819 12:43:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.819 12:43:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.819 12:43:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.819 12:43:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.819 12:43:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.819 12:43:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.076 12:43:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.076 12:43:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.076 12:43:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.076 12:43:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.076 12:43:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.076 12:43:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:05:00.076 12:43:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.076 12:43:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.076 12:43:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.076 12:43:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.076 12:43:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.334 12:43:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.334 12:43:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.334 12:43:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.334 12:43:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.334 12:43:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.334 12:43:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.334 12:43:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.334 12:43:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.334 12:43:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.334 12:43:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.334 12:43:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.334 12:43:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.334 12:43:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.592 12:43:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.592 [2024-11-27 12:43:26.890186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.592 [2024-11-27 12:43:26.924931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.592 [2024-11-27 12:43:26.924933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.592 [2024-11-27 12:43:26.965553] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.592 [2024-11-27 12:43:26.965596] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.875 12:43:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.875 12:43:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:03.875 spdk_app_start Round 1 00:05:03.875 12:43:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3974458 /var/tmp/spdk-nbd.sock 00:05:03.875 12:43:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3974458 ']' 00:05:03.875 12:43:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.875 12:43:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.875 12:43:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
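Each round's payload check, seen in full above, is the nbd_dd_data_verify / nbd_get_count pair: write the same 1 MiB of random data through both nbd nodes with O_DIRECT, byte-compare it back with cmp, detach the disks, and require nbd_get_disks to report an empty list before spdk_kill_instance SIGTERM. Roughly, with the rpc.py path and -s socket as in this workspace (the pattern-file name is illustrative):

RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # $SPDK_DIR as above
pattern=/tmp/nbdrandtest

dd if=/dev/urandom of=$pattern bs=4096 count=256           # 1 MiB reference pattern
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$pattern of=$nbd bs=4096 count=256 oflag=direct  # write it out
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $pattern $nbd                             # read-compare it back
done
rm $pattern

for nbd in /dev/nbd0 /dev/nbd1; do $RPC nbd_stop_disk $nbd; done
[ "$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)" -eq 0 ] &&
    $RPC spdk_kill_instance SIGTERM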
00:05:03.875 12:43:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.875 12:43:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.875 12:43:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.875 12:43:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:03.875 12:43:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.875 Malloc0 00:05:03.875 12:43:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.132 Malloc1 00:05:04.132 12:43:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.132 /dev/nbd0 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.132 12:43:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.132 12:43:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:04.132 12:43:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.132 12:43:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.132 12:43:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.132 12:43:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:04.132 12:43:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.132 12:43:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.132 12:43:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:04.390 1+0 records in 00:05:04.390 1+0 records out 00:05:04.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222719 s, 18.4 MB/s 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.390 12:43:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.390 12:43:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.390 12:43:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.390 /dev/nbd1 00:05:04.390 12:43:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.390 12:43:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.390 1+0 records in 00:05:04.390 1+0 records out 00:05:04.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161485 s, 25.4 MB/s 00:05:04.390 12:43:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:04.648 12:43:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.648 12:43:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:04.648 12:43:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.648 12:43:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.648 12:43:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.648 12:43:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.648 12:43:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.648 12:43:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.648 12:43:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.648 12:43:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.648 { 00:05:04.648 
"nbd_device": "/dev/nbd0", 00:05:04.648 "bdev_name": "Malloc0" 00:05:04.648 }, 00:05:04.648 { 00:05:04.648 "nbd_device": "/dev/nbd1", 00:05:04.648 "bdev_name": "Malloc1" 00:05:04.648 } 00:05:04.648 ]' 00:05:04.648 12:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.648 { 00:05:04.648 "nbd_device": "/dev/nbd0", 00:05:04.648 "bdev_name": "Malloc0" 00:05:04.648 }, 00:05:04.648 { 00:05:04.648 "nbd_device": "/dev/nbd1", 00:05:04.648 "bdev_name": "Malloc1" 00:05:04.648 } 00:05:04.648 ]' 00:05:04.648 12:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.648 12:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.648 /dev/nbd1' 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.648 /dev/nbd1' 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.648 256+0 records in 00:05:04.648 256+0 records out 00:05:04.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110054 s, 95.3 MB/s 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.648 12:43:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.905 256+0 records in 00:05:04.905 256+0 records out 00:05:04.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191225 s, 54.8 MB/s 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.905 256+0 records in 00:05:04.905 256+0 records out 00:05:04.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202442 s, 51.8 MB/s 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.905 12:43:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.906 12:43:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.906 12:43:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.906 12:43:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.906 12:43:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.906 12:43:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.906 12:43:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.906 12:43:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.906 12:43:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.164 12:43:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.422 12:43:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.422 12:43:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.422 12:43:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.422 12:43:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.422 12:43:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.422 12:43:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.422 12:43:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.422 12:43:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.422 12:43:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.422 12:43:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.422 12:43:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.422 12:43:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.422 12:43:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.680 12:43:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.939 [2024-11-27 12:43:32.088836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.939 [2024-11-27 12:43:32.124186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.939 [2024-11-27 12:43:32.124188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.939 [2024-11-27 12:43:32.165797] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.939 [2024-11-27 12:43:32.165840] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.222 12:43:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.222 12:43:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:09.222 spdk_app_start Round 2 00:05:09.222 12:43:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3974458 /var/tmp/spdk-nbd.sock 00:05:09.222 12:43:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3974458 ']' 00:05:09.222 12:43:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.222 12:43:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.222 12:43:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
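The front half of each round, traced again above, is the Malloc0/Malloc1 setup: two 64 MiB malloc bdevs with a 4 KiB block size are created over the nbd-specific RPC socket and exported as /dev/nbd0 and /dev/nbd1. A sketch, using $RPC from the earlier snippet (the trace lets the RPC auto-name the bdevs Malloc0/Malloc1; passing -b to pin the names explicitly is an assumption about preferred usage, not what the script does):

bdevs=(Malloc0 Malloc1); nbds=(/dev/nbd0 /dev/nbd1)
for i in 0 1; do
    $RPC bdev_malloc_create -b ${bdevs[i]} 64 4096   # 64 MiB, 4096-byte blocks
    $RPC nbd_start_disk ${bdevs[i]} ${nbds[i]}
done
$RPC nbd_get_disks | jq -r '.[] | .nbd_device'       # expect both nodes listed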
00:05:09.222 12:43:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.222 12:43:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.222 12:43:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.222 12:43:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:09.222 12:43:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.222 Malloc0 00:05:09.222 12:43:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.222 Malloc1 00:05:09.222 12:43:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.222 12:43:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.480 /dev/nbd0 00:05:09.480 12:43:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.480 12:43:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct
00:05:09.480 1+0 records in
00:05:09.480 1+0 records out
00:05:09.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220979 s, 18.5 MB/s
00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:09.480 12:43:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:09.480 12:43:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:09.480 12:43:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:09.480 12:43:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:09.738 /dev/nbd1
00:05:09.738 12:43:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:09.738 12:43:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:09.738 1+0 records in
00:05:09.738 1+0 records out
00:05:09.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258325 s, 15.9 MB/s
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:09.738 12:43:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:09.738 12:43:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:09.738 12:43:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:09.738 12:43:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:09.738 12:43:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:09.738 12:43:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:09.996   {
00:05:09.996     "nbd_device": "/dev/nbd0",
00:05:09.996     "bdev_name": "Malloc0"
00:05:09.996   },
00:05:09.996   {
00:05:09.996     "nbd_device": "/dev/nbd1",
00:05:09.996     "bdev_name": "Malloc1"
00:05:09.996   }
00:05:09.996 ]'
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:09.996   {
00:05:09.996     "nbd_device": "/dev/nbd0",
00:05:09.996     "bdev_name": "Malloc0"
00:05:09.996   },
00:05:09.996   {
00:05:09.996     "nbd_device": "/dev/nbd1",
00:05:09.996     "bdev_name": "Malloc1"
00:05:09.996   }
00:05:09.996 ]'
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:09.996 /dev/nbd1'
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:09.996 /dev/nbd1'
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:09.996 256+0 records in
00:05:09.996 256+0 records out
00:05:09.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104124 s, 101 MB/s
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:09.996 256+0 records in
00:05:09.996 256+0 records out
00:05:09.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192297 s, 54.5 MB/s
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:09.996 256+0 records in
00:05:09.996 256+0 records out
00:05:09.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206529 s, 50.8 MB/s
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:09.996 12:43:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:10.253 12:43:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:10.253 12:43:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:10.253 12:43:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:10.253 12:43:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:10.253 12:43:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:10.253 12:43:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:10.253 12:43:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:10.253 12:43:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:10.253 12:43:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:10.253 12:43:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:10.510 12:43:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:10.510 12:43:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:10.510 12:43:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:10.510 12:43:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:10.510 12:43:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:10.510 12:43:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:10.510 12:43:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:10.510 12:43:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:10.510 12:43:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:10.510 12:43:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:10.510 12:43:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:10.766 12:43:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:10.766 12:43:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:10.766 12:43:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:10.766 12:43:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:10.766 12:43:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:10.766 12:43:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:10.766 12:43:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:10.766 12:43:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:10.766 12:43:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:10.766 12:43:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:10.766 12:43:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:10.766 12:43:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:10.766 12:43:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:11.024 12:43:37 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:11.024 [2024-11-27 12:43:37.388789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:11.280 [2024-11-27 12:43:37.425044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:11.280 [2024-11-27 12:43:37.425047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.280 [2024-11-27 12:43:37.465817] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:11.280 [2024-11-27 12:43:37.465860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:14.558 12:43:40 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3974458 /var/tmp/spdk-nbd.sock
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3974458 ']'
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
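
# The nbd_dd_data_verify write/verify pass traced above boils down to a simple
# pattern: fill a temp file with random bytes, copy it onto every exported NBD
# device with O_DIRECT, then byte-compare each device against the file. A minimal
# standalone sketch of the same idea (device names and the temp path here are
# illustrative, not taken from this run):
tmp_file=$(mktemp)
nbd_list=(/dev/nbd0 /dev/nbd1)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256                # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct     # write, bypassing the page cache
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev" || echo "data mismatch on $dev"
done
rm "$tmp_file"
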
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:14.558 12:43:40 event.app_repeat -- event/event.sh@39 -- # killprocess 3974458
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3974458 ']'
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3974458
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3974458
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3974458'
killing process with pid 3974458
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3974458
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3974458
00:05:14.558 spdk_app_start is called in Round 0.
00:05:14.558 Shutdown signal received, stop current app iteration
00:05:14.558 Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 reinitialization...
00:05:14.558 spdk_app_start is called in Round 1.
00:05:14.558 Shutdown signal received, stop current app iteration
00:05:14.558 Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 reinitialization...
00:05:14.558 spdk_app_start is called in Round 2.
00:05:14.558 Shutdown signal received, stop current app iteration
00:05:14.558 Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 reinitialization...
00:05:14.558 spdk_app_start is called in Round 3.
00:05:14.558 Shutdown signal received, stop current app iteration
00:05:14.558 12:43:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:14.558 12:43:40 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:14.558
00:05:14.558 real 0m16.836s
00:05:14.558 user 0m36.339s
00:05:14.558 sys 0m3.038s
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:14.558 12:43:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:14.558 ************************************
00:05:14.558 END TEST app_repeat
00:05:14.558 ************************************
00:05:14.558 12:43:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:14.558 12:43:40 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:14.558 12:43:40 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:14.558 12:43:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:14.558 12:43:40 event -- common/autotest_common.sh@10 -- # set +x
00:05:14.558 ************************************
00:05:14.558 START TEST cpu_locks
00:05:14.558 ************************************
00:05:14.558 12:43:40 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:14.558 * Looking for test storage...
00:05:14.558 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event
00:05:14.558 12:43:40 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:14.558 12:43:40 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:05:14.558 12:43:40 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:14.558 12:43:40 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:14.558 12:43:40 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:14.558 12:43:40 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:14.558 12:43:40 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:14.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.558 --rc genhtml_branch_coverage=1
00:05:14.558 --rc genhtml_function_coverage=1
00:05:14.558 --rc genhtml_legend=1
00:05:14.558 --rc geninfo_all_blocks=1
00:05:14.558 --rc geninfo_unexecuted_blocks=1
00:05:14.558
00:05:14.558 '
00:05:14.558 12:43:40 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:14.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.558 --rc genhtml_branch_coverage=1
00:05:14.558 --rc genhtml_function_coverage=1
00:05:14.558 --rc genhtml_legend=1
00:05:14.558 --rc geninfo_all_blocks=1
00:05:14.558 --rc geninfo_unexecuted_blocks=1
00:05:14.558
00:05:14.558 '
00:05:14.558 12:43:40 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:14.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.558 --rc genhtml_branch_coverage=1
00:05:14.558 --rc genhtml_function_coverage=1
00:05:14.558 --rc genhtml_legend=1
00:05:14.558 --rc geninfo_all_blocks=1
00:05:14.558 --rc geninfo_unexecuted_blocks=1
00:05:14.558
00:05:14.558 '
00:05:14.558 12:43:40 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:14.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.559 --rc genhtml_branch_coverage=1
00:05:14.559 --rc genhtml_function_coverage=1
00:05:14.559 --rc genhtml_legend=1
00:05:14.559 --rc geninfo_all_blocks=1
00:05:14.559 --rc geninfo_unexecuted_blocks=1
00:05:14.559
00:05:14.559 '
00:05:14.559 12:43:40 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:14.559 12:43:40 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:14.559 12:43:40 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:14.559 12:43:40 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:14.559 12:43:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:14.559 12:43:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:14.559 12:43:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:14.559 ************************************
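
# The "lt 1.15 2" trace above is scripts/common.sh gating the lcov coverage flags
# on the installed lcov version. A condensed sketch of that comparison (the helper
# name and layout here are illustrative; the real cmp_versions also handles '>',
# '=', and mixed separators via IFS=.-:):
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earlier field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # all fields equal
}
version_lt 1.15 2 && echo "lcov < 2: pass --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
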
00:05:14.559 START TEST default_locks 00:05:14.559 ************************************ 00:05:14.559 12:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:14.817 12:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3977676 00:05:14.817 12:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3977676 00:05:14.817 12:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.817 12:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3977676 ']' 00:05:14.817 12:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.817 12:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.817 12:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.817 12:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.817 12:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.817 [2024-11-27 12:43:40.992093] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:05:14.817 [2024-11-27 12:43:40.992137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3977676 ] 00:05:14.817 [2024-11-27 12:43:41.081492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.817 [2024-11-27 12:43:41.120241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.751 12:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.751 12:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:15.751 12:43:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3977676 00:05:15.751 12:43:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.751 12:43:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3977676 00:05:16.009 lslocks: write error 00:05:16.009 12:43:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3977676 00:05:16.010 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3977676 ']' 00:05:16.010 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3977676 00:05:16.010 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:16.010 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.010 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3977676 00:05:16.010 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.010 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.010 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3977676' 00:05:16.010 killing process with pid 3977676 00:05:16.010 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3977676 00:05:16.010 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3977676 00:05:16.268 12:43:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3977676 00:05:16.268 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:16.268 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3977676 00:05:16.526 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:16.526 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.526 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:16.526 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.526 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3977676 00:05:16.526 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3977676 ']' 00:05:16.526 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.526 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.526 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
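
# killprocess, traced above for pids 3974458 and 3977676, is a guarded kill:
# verify liveness, peek at the process name, then kill and reap. Condensed sketch
# (the real helper in autotest_common.sh also special-cases sudo-owned processes,
# which is what the "reactor_0 = sudo" test in the trace checks):
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                  # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid"                                 # reap the child; propagates its exit status
}
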
00:05:16.526 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.526 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.526 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3977676) - No such process 00:05:16.526 ERROR: process (pid: 3977676) is no longer running 00:05:16.527 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.527 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:16.527 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:16.527 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:16.527 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:16.527 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:16.527 12:43:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:16.527 12:43:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:16.527 12:43:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:16.527 12:43:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:16.527 00:05:16.527 real 0m1.720s 00:05:16.527 user 0m1.808s 00:05:16.527 sys 0m0.623s 00:05:16.527 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.527 12:43:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.527 ************************************ 00:05:16.527 END TEST default_locks 00:05:16.527 ************************************ 00:05:16.527 12:43:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:16.527 12:43:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.527 12:43:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.527 12:43:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.527 ************************************ 00:05:16.527 START TEST default_locks_via_rpc 00:05:16.527 ************************************ 00:05:16.527 12:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:16.527 12:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3977975 00:05:16.527 12:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3977975 00:05:16.527 12:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.527 12:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3977975 ']' 00:05:16.527 12:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.527 12:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.527 12:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
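
# locks_exist, used right after each spdk_tgt comes up in these tests, asserts
# that the target holds its per-core lock files (/var/tmp/spdk_cpu_lock_*).
# Equivalent check, assuming $pid is a running spdk_tgt:
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "pid $pid holds its core locks"
# The "lslocks: write error" seen in this log appears to be benign: grep -q exits
# on the first match, so lslocks takes EPIPE on stdout and reports a write error.
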
00:05:16.527 12:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.527 12:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.527 [2024-11-27 12:43:42.804587] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:05:16.527 [2024-11-27 12:43:42.804645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3977975 ] 00:05:16.527 [2024-11-27 12:43:42.892521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.784 [2024-11-27 12:43:42.930305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3977975 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3977975 00:05:17.351 12:43:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.917 12:43:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3977975 00:05:17.917 12:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3977975 ']' 00:05:17.917 12:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3977975 00:05:17.917 12:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:17.917 12:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.917 12:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3977975 00:05:18.175 12:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.175 
12:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.175 12:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3977975' 00:05:18.175 killing process with pid 3977975 00:05:18.175 12:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3977975 00:05:18.175 12:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3977975 00:05:18.434 00:05:18.434 real 0m1.854s 00:05:18.434 user 0m1.965s 00:05:18.434 sys 0m0.666s 00:05:18.434 12:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.434 12:43:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.434 ************************************ 00:05:18.434 END TEST default_locks_via_rpc 00:05:18.434 ************************************ 00:05:18.434 12:43:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:18.434 12:43:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.434 12:43:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.434 12:43:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.434 ************************************ 00:05:18.434 START TEST non_locking_app_on_locked_coremask 00:05:18.434 ************************************ 00:05:18.434 12:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:18.434 12:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3978276 00:05:18.434 12:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3978276 /var/tmp/spdk.sock 00:05:18.434 12:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.434 12:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3978276 ']' 00:05:18.434 12:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.434 12:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.434 12:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.434 12:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.434 12:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.434 [2024-11-27 12:43:44.736407] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
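
# The via-RPC variant that just finished differs from default_locks only in how
# the core locks are managed: instead of startup flags, they are toggled on a
# live target. Sketch, assuming a target listening on the default /var/tmp/spdk.sock:
scripts/rpc.py framework_disable_cpumask_locks   # releases /var/tmp/spdk_cpu_lock_*
scripts/rpc.py framework_enable_cpumask_locks    # re-acquires them
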
00:05:18.434 [2024-11-27 12:43:44.736454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3978276 ] 00:05:18.692 [2024-11-27 12:43:44.824786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.692 [2024-11-27 12:43:44.865677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.258 12:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.258 12:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:19.258 12:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3978541 00:05:19.258 12:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3978541 /var/tmp/spdk2.sock 00:05:19.258 12:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:19.258 12:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3978541 ']' 00:05:19.258 12:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.258 12:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.258 12:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.258 12:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.258 12:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.258 [2024-11-27 12:43:45.612281] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:05:19.258 [2024-11-27 12:43:45.612334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3978541 ] 00:05:19.516 [2024-11-27 12:43:45.742838] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:19.516 [2024-11-27 12:43:45.742869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.516 [2024-11-27 12:43:45.817439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.083 12:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.083 12:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:20.083 12:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3978276 00:05:20.083 12:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3978276 00:05:20.083 12:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.457 lslocks: write error 00:05:21.457 12:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3978276 00:05:21.457 12:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3978276 ']' 00:05:21.457 12:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3978276 00:05:21.457 12:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:21.457 12:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.457 12:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3978276 00:05:21.457 12:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.457 12:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.457 12:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3978276' 00:05:21.457 killing process with pid 3978276 00:05:21.457 12:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3978276 00:05:21.457 12:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3978276 00:05:21.715 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3978541 00:05:21.715 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3978541 ']' 00:05:21.715 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3978541 00:05:21.973 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:21.973 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.973 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3978541 00:05:21.973 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.973 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.973 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3978541' 00:05:21.973 
killing process with pid 3978541 00:05:21.973 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3978541 00:05:21.973 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3978541 00:05:22.231 00:05:22.231 real 0m3.774s 00:05:22.231 user 0m4.074s 00:05:22.231 sys 0m1.221s 00:05:22.231 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.231 12:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.231 ************************************ 00:05:22.231 END TEST non_locking_app_on_locked_coremask 00:05:22.232 ************************************ 00:05:22.232 12:43:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:22.232 12:43:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.232 12:43:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.232 12:43:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.232 ************************************ 00:05:22.232 START TEST locking_app_on_unlocked_coremask 00:05:22.232 ************************************ 00:05:22.232 12:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:22.232 12:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3979107 00:05:22.232 12:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3979107 /var/tmp/spdk.sock 00:05:22.232 12:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:22.232 12:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3979107 ']' 00:05:22.232 12:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.232 12:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.232 12:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.232 12:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.232 12:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.232 [2024-11-27 12:43:48.587493] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:05:22.232 [2024-11-27 12:43:48.587537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3979107 ] 00:05:22.490 [2024-11-27 12:43:48.675474] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
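
# The test that just ended and the one starting here exercise the same
# coexistence rule from opposite sides: two targets may share core 0 only when
# at least one of them opts out of core locking. Sketch of the two launches as
# traced above (binary path shortened):
spdk_tgt -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # skips the lock, so both can run
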
00:05:22.490 [2024-11-27 12:43:48.675500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.490 [2024-11-27 12:43:48.716392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.055 12:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.056 12:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:23.056 12:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3979129 00:05:23.056 12:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3979129 /var/tmp/spdk2.sock 00:05:23.056 12:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:23.056 12:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3979129 ']' 00:05:23.056 12:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.056 12:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.056 12:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.056 12:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.056 12:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.313 [2024-11-27 12:43:49.462350] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:05:23.313 [2024-11-27 12:43:49.462404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3979129 ] 00:05:23.313 [2024-11-27 12:43:49.593072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.313 [2024-11-27 12:43:49.668161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.279 12:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.279 12:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:24.279 12:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3979129 00:05:24.279 12:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3979129 00:05:24.279 12:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.264 lslocks: write error 00:05:25.264 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3979107 00:05:25.264 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3979107 ']' 00:05:25.264 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3979107 00:05:25.264 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.264 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.264 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3979107 00:05:25.264 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.264 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.264 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3979107' 00:05:25.264 killing process with pid 3979107 00:05:25.264 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3979107 00:05:25.264 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3979107 00:05:25.830 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3979129 00:05:25.830 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3979129 ']' 00:05:25.831 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3979129 00:05:25.831 12:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.831 12:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.831 12:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3979129 00:05:25.831 12:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.831 12:43:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.831 12:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3979129' 00:05:25.831 killing process with pid 3979129 00:05:25.831 12:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3979129 00:05:25.831 12:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3979129 00:05:26.088 00:05:26.088 real 0m3.824s 00:05:26.088 user 0m4.161s 00:05:26.088 sys 0m1.266s 00:05:26.088 12:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.088 12:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.088 ************************************ 00:05:26.088 END TEST locking_app_on_unlocked_coremask 00:05:26.088 ************************************ 00:05:26.088 12:43:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:26.088 12:43:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.088 12:43:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.088 12:43:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.088 ************************************ 00:05:26.088 START TEST locking_app_on_locked_coremask 00:05:26.088 ************************************ 00:05:26.088 12:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:26.088 12:43:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3979699 00:05:26.088 12:43:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3979699 /var/tmp/spdk.sock 00:05:26.088 12:43:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.088 12:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3979699 ']' 00:05:26.088 12:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.088 12:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.088 12:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.088 12:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.088 12:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.344 [2024-11-27 12:43:52.491389] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:05:26.344 [2024-11-27 12:43:52.491436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3979699 ] 00:05:26.344 [2024-11-27 12:43:52.575914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.344 [2024-11-27 12:43:52.613314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3979963 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3979963 /var/tmp/spdk2.sock 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3979963 /var/tmp/spdk2.sock 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3979963 /var/tmp/spdk2.sock 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3979963 ']' 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.277 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.277 [2024-11-27 12:43:53.350397] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:05:27.277 [2024-11-27 12:43:53.350446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3979963 ] 00:05:27.277 [2024-11-27 12:43:53.479913] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3979699 has claimed it. 00:05:27.277 [2024-11-27 12:43:53.479957] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.840 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3979963) - No such process 00:05:27.840 ERROR: process (pid: 3979963) is no longer running 00:05:27.840 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.840 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:27.840 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:27.840 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.840 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.840 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.840 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3979699 00:05:27.840 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3979699 00:05:27.840 12:43:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.096 lslocks: write error 00:05:28.096 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3979699 00:05:28.096 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3979699 ']' 00:05:28.096 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3979699 00:05:28.096 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.096 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.096 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3979699 00:05:28.353 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.353 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.353 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3979699' 00:05:28.353 killing process with pid 3979699 00:05:28.353 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3979699 00:05:28.353 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3979699 00:05:28.610 00:05:28.610 real 0m2.342s 00:05:28.610 user 0m2.591s 00:05:28.610 sys 0m0.703s 00:05:28.610 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
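
# The expected-failure path above is wrapped in the NOT helper: the second target
# must fail to claim core 0, and that failure is what makes the test pass.
# Condensed sketch of the pattern (the real helper also tracks the exit status
# and distinguishes crash codes > 128, as the "(( es > 128 ))" trace shows):
NOT() { ! "$@"; }
if NOT waitforlisten "$pid2" /var/tmp/spdk2.sock; then
    echo "second instance was refused the locked core, as expected"
fi
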
00:05:28.610 12:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.610 ************************************ 00:05:28.610 END TEST locking_app_on_locked_coremask 00:05:28.610 ************************************ 00:05:28.610 12:43:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:28.610 12:43:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.610 12:43:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.610 12:43:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.610 ************************************ 00:05:28.610 START TEST locking_overlapped_coremask 00:05:28.610 ************************************ 00:05:28.610 12:43:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:28.610 12:43:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3980255 00:05:28.610 12:43:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3980255 /var/tmp/spdk.sock 00:05:28.610 12:43:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:28.610 12:43:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3980255 ']' 00:05:28.610 12:43:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.610 12:43:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.610 12:43:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.610 12:43:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.610 12:43:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.610 [2024-11-27 12:43:54.919422] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:05:28.610 [2024-11-27 12:43:54.919467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980255 ] 00:05:28.867 [2024-11-27 12:43:55.006466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.867 [2024-11-27 12:43:55.044596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.867 [2024-11-27 12:43:55.044691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.867 [2024-11-27 12:43:55.044694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3980282 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3980282 /var/tmp/spdk2.sock 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3980282 /var/tmp/spdk2.sock 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3980282 /var/tmp/spdk2.sock 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3980282 ']' 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.430 12:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.430 [2024-11-27 12:43:55.777142] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:05:29.430 [2024-11-27 12:43:55.777195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980282 ] 00:05:29.685 [2024-11-27 12:43:55.909914] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3980255 has claimed it. 00:05:29.685 [2024-11-27 12:43:55.909963] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:30.250 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3980282) - No such process 00:05:30.250 ERROR: process (pid: 3980282) is no longer running 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3980255 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3980255 ']' 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3980255 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3980255 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3980255' 00:05:30.250 killing process with pid 3980255 00:05:30.250 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3980255 00:05:30.250 12:43:56 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3980255 00:05:30.509 00:05:30.509 real 0m1.936s 00:05:30.509 user 0m5.495s 00:05:30.509 sys 0m0.508s 00:05:30.509 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.509 12:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.509 ************************************ 00:05:30.509 END TEST locking_overlapped_coremask 00:05:30.509 ************************************ 00:05:30.509 12:43:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:30.509 12:43:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.509 12:43:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.509 12:43:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.509 ************************************ 00:05:30.509 START TEST locking_overlapped_coremask_via_rpc 00:05:30.509 ************************************ 00:05:30.509 12:43:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:30.509 12:43:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3980568 00:05:30.509 12:43:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3980568 /var/tmp/spdk.sock 00:05:30.509 12:43:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:30.509 12:43:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3980568 ']' 00:05:30.509 12:43:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.509 12:43:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.509 12:43:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.509 12:43:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.509 12:43:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.767 [2024-11-27 12:43:56.927674] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:05:30.767 [2024-11-27 12:43:56.927723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980568 ] 00:05:30.767 [2024-11-27 12:43:57.015666] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:30.767 [2024-11-27 12:43:57.015694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.767 [2024-11-27 12:43:57.053299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.767 [2024-11-27 12:43:57.053393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.767 [2024-11-27 12:43:57.053396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.699 12:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.699 12:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.699 12:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3980812 00:05:31.699 12:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3980812 /var/tmp/spdk2.sock 00:05:31.699 12:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:31.699 12:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3980812 ']' 00:05:31.699 12:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.699 12:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.699 12:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.699 12:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.699 12:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.699 [2024-11-27 12:43:57.808559] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:05:31.699 [2024-11-27 12:43:57.808621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980812 ] 00:05:31.699 [2024-11-27 12:43:57.940361] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
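Why core 2 is the contested one: the first target above was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the two masks intersect on core 2 only. Both start cleanly here because --disable-cpumask-locks deactivates the per-core locks; the conflict only surfaces once framework_enable_cpumask_locks is invoked. A quick bash sketch of the overlap calculation; the lock-file path in the comment matches the spdk_cpu_lock_000..002 names checked elsewhere in the trace:

  m1=0x7; m2=0x1c                               # the two masks launched above
  printf 'shared cores mask: 0x%x\n' $(( m1 & m2 ))   # -> 0x4, i.e. core 2
  # SPDK backs each claimed core with a lock file such as
  # /var/tmp/spdk_cpu_lock_002; only one flock holder is allowed, hence
  # "Cannot create lock on core 2" once locks are (re-)enabled.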
00:05:31.699 [2024-11-27 12:43:57.940395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.699 [2024-11-27 12:43:58.022562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.699 [2024-11-27 12:43:58.025658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.699 [2024-11-27 12:43:58.025659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:32.261 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.261 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.261 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.261 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.261 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.519 [2024-11-27 12:43:58.659684] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3980568 has claimed it. 
00:05:32.519 request: 00:05:32.519 { 00:05:32.519 "method": "framework_enable_cpumask_locks", 00:05:32.519 "req_id": 1 00:05:32.519 } 00:05:32.519 Got JSON-RPC error response 00:05:32.519 response: 00:05:32.519 { 00:05:32.519 "code": -32603, 00:05:32.519 "message": "Failed to claim CPU core: 2" 00:05:32.519 } 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3980568 /var/tmp/spdk.sock 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3980568 ']' 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3980812 /var/tmp/spdk2.sock 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3980812 ']' 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
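The -32603 response above can be reproduced by hand: with the first target still holding core 2's lock, asking the second target over its RPC socket to activate cpumask locks fails the same way. A sketch using the rpc.py client from the same SPDK checkout, with the socket path as in this workspace:

  # Ask the second target (listening on spdk2.sock) to activate CPU core locks.
  # With pid 3980568 already holding core 2, this returns the -32603 error above.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk2.sock framework_enable_cpumask_locks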
00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.519 12:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.776 12:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.776 12:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.776 12:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:32.776 12:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.776 12:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.776 12:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.776 00:05:32.776 real 0m2.192s 00:05:32.776 user 0m0.913s 00:05:32.776 sys 0m0.209s 00:05:32.776 12:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.776 12:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.776 ************************************ 00:05:32.776 END TEST locking_overlapped_coremask_via_rpc 00:05:32.776 ************************************ 00:05:32.776 12:43:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:32.776 12:43:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3980568 ]] 00:05:32.776 12:43:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3980568 00:05:32.776 12:43:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3980568 ']' 00:05:32.776 12:43:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3980568 00:05:32.776 12:43:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:32.776 12:43:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.776 12:43:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3980568 00:05:33.032 12:43:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.032 12:43:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.032 12:43:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3980568' 00:05:33.032 killing process with pid 3980568 00:05:33.032 12:43:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3980568 00:05:33.032 12:43:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3980568 00:05:33.288 12:43:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3980812 ]] 00:05:33.288 12:43:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3980812 00:05:33.288 12:43:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3980812 ']' 00:05:33.288 12:43:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3980812 00:05:33.288 12:43:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:33.288 12:43:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:33.288 12:43:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3980812 00:05:33.288 12:43:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:33.288 12:43:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:33.288 12:43:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3980812' 00:05:33.288 killing process with pid 3980812 00:05:33.288 12:43:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3980812 00:05:33.288 12:43:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3980812 00:05:33.546 12:43:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:33.546 12:43:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:33.546 12:43:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3980568 ]] 00:05:33.546 12:43:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3980568 00:05:33.546 12:43:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3980568 ']' 00:05:33.546 12:43:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3980568 00:05:33.546 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3980568) - No such process 00:05:33.546 12:43:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3980568 is not found' 00:05:33.546 Process with pid 3980568 is not found 00:05:33.546 12:43:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3980812 ]] 00:05:33.546 12:43:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3980812 00:05:33.546 12:43:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3980812 ']' 00:05:33.546 12:43:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3980812 00:05:33.546 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3980812) - No such process 00:05:33.546 12:43:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3980812 is not found' 00:05:33.546 Process with pid 3980812 is not found 00:05:33.546 12:43:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:33.546 00:05:33.546 real 0m19.164s 00:05:33.546 user 0m32.280s 00:05:33.546 sys 0m6.318s 00:05:33.546 12:43:59 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.546 12:43:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.546 ************************************ 00:05:33.546 END TEST cpu_locks 00:05:33.546 ************************************ 00:05:33.546 00:05:33.546 real 0m44.665s 00:05:33.546 user 1m23.569s 00:05:33.546 sys 0m10.540s 00:05:33.546 12:43:59 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.546 12:43:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.546 ************************************ 00:05:33.546 END TEST event 00:05:33.546 ************************************ 00:05:33.804 12:43:59 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:33.804 12:43:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.804 12:43:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.804 12:43:59 -- common/autotest_common.sh@10 -- # set +x 00:05:33.804 ************************************ 00:05:33.804 START TEST thread 00:05:33.804 ************************************ 00:05:33.804 12:43:59 thread -- common/autotest_common.sh@1129 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:33.804 * Looking for test storage... 00:05:33.804 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:05:33.804 12:44:00 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.804 12:44:00 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.804 12:44:00 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.804 12:44:00 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.804 12:44:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.804 12:44:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.804 12:44:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.804 12:44:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.804 12:44:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.804 12:44:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.804 12:44:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.804 12:44:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.804 12:44:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.804 12:44:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.804 12:44:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.804 12:44:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:33.804 12:44:00 thread -- scripts/common.sh@345 -- # : 1 00:05:33.804 12:44:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.804 12:44:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.804 12:44:00 thread -- scripts/common.sh@365 -- # decimal 1 00:05:33.804 12:44:00 thread -- scripts/common.sh@353 -- # local d=1 00:05:33.804 12:44:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.804 12:44:00 thread -- scripts/common.sh@355 -- # echo 1 00:05:33.804 12:44:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.804 12:44:00 thread -- scripts/common.sh@366 -- # decimal 2 00:05:34.060 12:44:00 thread -- scripts/common.sh@353 -- # local d=2 00:05:34.060 12:44:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.060 12:44:00 thread -- scripts/common.sh@355 -- # echo 2 00:05:34.060 12:44:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.060 12:44:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.060 12:44:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.060 12:44:00 thread -- scripts/common.sh@368 -- # return 0 00:05:34.060 12:44:00 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.060 12:44:00 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:34.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.060 --rc genhtml_branch_coverage=1 00:05:34.060 --rc genhtml_function_coverage=1 00:05:34.060 --rc genhtml_legend=1 00:05:34.060 --rc geninfo_all_blocks=1 00:05:34.060 --rc geninfo_unexecuted_blocks=1 00:05:34.060 00:05:34.060 ' 00:05:34.060 12:44:00 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:34.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.060 --rc genhtml_branch_coverage=1 00:05:34.060 --rc genhtml_function_coverage=1 00:05:34.060 --rc genhtml_legend=1 00:05:34.060 --rc geninfo_all_blocks=1 00:05:34.060 --rc geninfo_unexecuted_blocks=1 00:05:34.060 00:05:34.060 ' 00:05:34.060 12:44:00 thread -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:34.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.060 --rc genhtml_branch_coverage=1 00:05:34.060 --rc genhtml_function_coverage=1 00:05:34.060 --rc genhtml_legend=1 00:05:34.060 --rc geninfo_all_blocks=1 00:05:34.060 --rc geninfo_unexecuted_blocks=1 00:05:34.060 00:05:34.060 ' 00:05:34.060 12:44:00 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:34.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.060 --rc genhtml_branch_coverage=1 00:05:34.060 --rc genhtml_function_coverage=1 00:05:34.060 --rc genhtml_legend=1 00:05:34.060 --rc geninfo_all_blocks=1 00:05:34.060 --rc geninfo_unexecuted_blocks=1 00:05:34.060 00:05:34.060 ' 00:05:34.060 12:44:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.060 12:44:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:34.060 12:44:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.060 12:44:00 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.060 ************************************ 00:05:34.060 START TEST thread_poller_perf 00:05:34.060 ************************************ 00:05:34.060 12:44:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.060 [2024-11-27 12:44:00.246069] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:05:34.060 [2024-11-27 12:44:00.246125] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981220 ] 00:05:34.060 [2024-11-27 12:44:00.336512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.060 [2024-11-27 12:44:00.376208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.060 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:35.430 [2024-11-27T11:44:01.815Z] ====================================== 00:05:35.430 [2024-11-27T11:44:01.815Z] busy:2510446826 (cyc) 00:05:35.430 [2024-11-27T11:44:01.815Z] total_run_count: 424000 00:05:35.430 [2024-11-27T11:44:01.815Z] tsc_hz: 2500000000 (cyc) 00:05:35.430 [2024-11-27T11:44:01.815Z] ====================================== 00:05:35.430 [2024-11-27T11:44:01.815Z] poller_cost: 5920 (cyc), 2368 (nsec) 00:05:35.430 00:05:35.430 real 0m1.190s 00:05:35.430 user 0m1.102s 00:05:35.430 sys 0m0.084s 00:05:35.430 12:44:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.430 12:44:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.430 ************************************ 00:05:35.430 END TEST thread_poller_perf 00:05:35.430 ************************************ 00:05:35.430 12:44:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.430 12:44:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:35.430 12:44:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.430 12:44:01 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.430 ************************************ 00:05:35.430 START TEST thread_poller_perf 00:05:35.430 ************************************ 00:05:35.430 12:44:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.430 [2024-11-27 12:44:01.519195] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:05:35.430 [2024-11-27 12:44:01.519267] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981506 ] 00:05:35.430 [2024-11-27 12:44:01.608621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.430 [2024-11-27 12:44:01.646129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.430 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:36.360 [2024-11-27T11:44:02.745Z] ====================================== 00:05:36.360 [2024-11-27T11:44:02.746Z] busy:2501780710 (cyc) 00:05:36.361 [2024-11-27T11:44:02.746Z] total_run_count: 5595000 00:05:36.361 [2024-11-27T11:44:02.746Z] tsc_hz: 2500000000 (cyc) 00:05:36.361 [2024-11-27T11:44:02.746Z] ====================================== 00:05:36.361 [2024-11-27T11:44:02.746Z] poller_cost: 447 (cyc), 178 (nsec) 00:05:36.361 00:05:36.361 real 0m1.186s 00:05:36.361 user 0m1.103s 00:05:36.361 sys 0m0.080s 00:05:36.361 12:44:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.361 12:44:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.361 ************************************ 00:05:36.361 END TEST thread_poller_perf 00:05:36.361 ************************************ 00:05:36.361 12:44:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:36.361 00:05:36.361 real 0m2.729s 00:05:36.361 user 0m2.380s 00:05:36.361 sys 0m0.369s 00:05:36.361 12:44:02 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.361 12:44:02 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.361 ************************************ 00:05:36.361 END TEST thread 00:05:36.361 ************************************ 00:05:36.618 12:44:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:36.618 12:44:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:36.618 12:44:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.618 12:44:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.618 12:44:02 -- common/autotest_common.sh@10 -- # set +x 00:05:36.618 ************************************ 00:05:36.618 START TEST app_cmdline 00:05:36.618 ************************************ 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:36.618 * Looking for test storage... 
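For reference, the poller_cost figures printed by both runs above are just busy cycles divided by iteration count, converted to nanoseconds via tsc_hz: 2510446826 / 424000 = 5920 cyc (2368 ns at 2.5 GHz) for the 1 us run, and 2501780710 / 5595000 = 447 cyc (178 ns) for the 0 us run. A bash sketch of the first calculation, using the figures from the trace:

  busy=2510446826; runs=424000; tsc_hz=2500000000   # numbers from the 1 us run
  cost_cyc=$(( busy / runs ))                       # 5920 cycles per invocation
  cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # 2368 ns at 2.5 GHz
  echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"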
00:05:36.618 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.618 12:44:02 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.618 --rc genhtml_branch_coverage=1 00:05:36.618 --rc genhtml_function_coverage=1 00:05:36.618 --rc genhtml_legend=1 00:05:36.618 --rc geninfo_all_blocks=1 00:05:36.618 --rc geninfo_unexecuted_blocks=1 00:05:36.618 00:05:36.618 ' 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.618 --rc genhtml_branch_coverage=1 00:05:36.618 --rc genhtml_function_coverage=1 00:05:36.618 --rc genhtml_legend=1 00:05:36.618 --rc geninfo_all_blocks=1 00:05:36.618 --rc geninfo_unexecuted_blocks=1 
00:05:36.618 00:05:36.618 ' 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.618 --rc genhtml_branch_coverage=1 00:05:36.618 --rc genhtml_function_coverage=1 00:05:36.618 --rc genhtml_legend=1 00:05:36.618 --rc geninfo_all_blocks=1 00:05:36.618 --rc geninfo_unexecuted_blocks=1 00:05:36.618 00:05:36.618 ' 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.618 --rc genhtml_branch_coverage=1 00:05:36.618 --rc genhtml_function_coverage=1 00:05:36.618 --rc genhtml_legend=1 00:05:36.618 --rc geninfo_all_blocks=1 00:05:36.618 --rc geninfo_unexecuted_blocks=1 00:05:36.618 00:05:36.618 ' 00:05:36.618 12:44:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:36.618 12:44:02 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:36.618 12:44:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3981833 00:05:36.618 12:44:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3981833 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3981833 ']' 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.618 12:44:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:36.875 [2024-11-27 12:44:03.017527] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:05:36.875 [2024-11-27 12:44:03.017574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981833 ] 00:05:36.875 [2024-11-27 12:44:03.106614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.875 [2024-11-27 12:44:03.147996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.132 12:44:03 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.132 12:44:03 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:37.132 12:44:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:37.389 { 00:05:37.389 "version": "SPDK v25.01-pre git sha1 24f0cb4c3", 00:05:37.389 "fields": { 00:05:37.389 "major": 25, 00:05:37.389 "minor": 1, 00:05:37.389 "patch": 0, 00:05:37.389 "suffix": "-pre", 00:05:37.389 "commit": "24f0cb4c3" 00:05:37.389 } 00:05:37.389 } 00:05:37.389 12:44:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:37.389 12:44:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:37.389 12:44:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:37.389 12:44:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:37.389 12:44:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:37.389 12:44:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.389 12:44:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.389 12:44:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:37.389 12:44:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:37.389 12:44:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:05:37.389 12:44:03 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:37.389 request: 00:05:37.389 { 00:05:37.389 "method": "env_dpdk_get_mem_stats", 00:05:37.389 "req_id": 1 00:05:37.389 } 00:05:37.389 Got JSON-RPC error response 00:05:37.389 response: 00:05:37.389 { 00:05:37.389 "code": -32601, 00:05:37.389 "message": "Method not found" 00:05:37.389 } 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:37.645 12:44:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3981833 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3981833 ']' 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3981833 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3981833 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3981833' 00:05:37.645 killing process with pid 3981833 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@973 -- # kill 3981833 00:05:37.645 12:44:03 app_cmdline -- common/autotest_common.sh@978 -- # wait 3981833 00:05:37.901 00:05:37.901 real 0m1.339s 00:05:37.901 user 0m1.513s 00:05:37.901 sys 0m0.498s 00:05:37.901 12:44:04 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.901 12:44:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.901 ************************************ 00:05:37.901 END TEST app_cmdline 00:05:37.901 ************************************ 00:05:37.901 12:44:04 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:37.901 12:44:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.901 12:44:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.901 12:44:04 -- common/autotest_common.sh@10 -- # set +x 00:05:37.901 ************************************ 00:05:37.901 START TEST version 00:05:37.901 ************************************ 00:05:37.901 12:44:04 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:38.160 * Looking for test storage... 
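The -32601 "Method not found" above is the expected outcome of the allow-list: cmdline.sh starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method, here env_dpdk_get_mem_stats, is rejected before dispatch. A sketch of both the permitted and the rejected call, run from the SPDK checkout used above:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./scripts/rpc.py spdk_get_version          # allowed: on the --rpcs-allowed list
  ./scripts/rpc.py env_dpdk_get_mem_stats \
      || echo 'rejected with -32601 Method not found, as in the trace above'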
00:05:38.160 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:38.160 12:44:04 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:38.160 12:44:04 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:38.160 12:44:04 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:38.160 12:44:04 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:38.160 12:44:04 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.160 12:44:04 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.160 12:44:04 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.160 12:44:04 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.160 12:44:04 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.160 12:44:04 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.160 12:44:04 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.160 12:44:04 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.160 12:44:04 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.160 12:44:04 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.160 12:44:04 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.160 12:44:04 version -- scripts/common.sh@344 -- # case "$op" in 00:05:38.160 12:44:04 version -- scripts/common.sh@345 -- # : 1 00:05:38.160 12:44:04 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.160 12:44:04 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.160 12:44:04 version -- scripts/common.sh@365 -- # decimal 1 00:05:38.160 12:44:04 version -- scripts/common.sh@353 -- # local d=1 00:05:38.160 12:44:04 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.160 12:44:04 version -- scripts/common.sh@355 -- # echo 1 00:05:38.160 12:44:04 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.160 12:44:04 version -- scripts/common.sh@366 -- # decimal 2 00:05:38.160 12:44:04 version -- scripts/common.sh@353 -- # local d=2 00:05:38.160 12:44:04 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.160 12:44:04 version -- scripts/common.sh@355 -- # echo 2 00:05:38.160 12:44:04 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.160 12:44:04 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.160 12:44:04 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.160 12:44:04 version -- scripts/common.sh@368 -- # return 0 00:05:38.160 12:44:04 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.160 12:44:04 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:38.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.160 --rc genhtml_branch_coverage=1 00:05:38.160 --rc genhtml_function_coverage=1 00:05:38.160 --rc genhtml_legend=1 00:05:38.160 --rc geninfo_all_blocks=1 00:05:38.160 --rc geninfo_unexecuted_blocks=1 00:05:38.160 00:05:38.160 ' 00:05:38.160 12:44:04 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:38.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.160 --rc genhtml_branch_coverage=1 00:05:38.160 --rc genhtml_function_coverage=1 00:05:38.160 --rc genhtml_legend=1 00:05:38.160 --rc geninfo_all_blocks=1 00:05:38.160 --rc geninfo_unexecuted_blocks=1 00:05:38.160 00:05:38.160 ' 00:05:38.160 12:44:04 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:38.160 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.160 --rc genhtml_branch_coverage=1 00:05:38.160 --rc genhtml_function_coverage=1 00:05:38.160 --rc genhtml_legend=1 00:05:38.160 --rc geninfo_all_blocks=1 00:05:38.160 --rc geninfo_unexecuted_blocks=1 00:05:38.160 00:05:38.160 ' 00:05:38.160 12:44:04 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:38.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.160 --rc genhtml_branch_coverage=1 00:05:38.160 --rc genhtml_function_coverage=1 00:05:38.160 --rc genhtml_legend=1 00:05:38.160 --rc geninfo_all_blocks=1 00:05:38.160 --rc geninfo_unexecuted_blocks=1 00:05:38.160 00:05:38.160 ' 00:05:38.160 12:44:04 version -- app/version.sh@17 -- # get_header_version major 00:05:38.160 12:44:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:38.160 12:44:04 version -- app/version.sh@14 -- # cut -f2 00:05:38.160 12:44:04 version -- app/version.sh@14 -- # tr -d '"' 00:05:38.160 12:44:04 version -- app/version.sh@17 -- # major=25 00:05:38.160 12:44:04 version -- app/version.sh@18 -- # get_header_version minor 00:05:38.160 12:44:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:38.160 12:44:04 version -- app/version.sh@14 -- # cut -f2 00:05:38.160 12:44:04 version -- app/version.sh@14 -- # tr -d '"' 00:05:38.160 12:44:04 version -- app/version.sh@18 -- # minor=1 00:05:38.160 12:44:04 version -- app/version.sh@19 -- # get_header_version patch 00:05:38.160 12:44:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:38.160 12:44:04 version -- app/version.sh@14 -- # cut -f2 00:05:38.161 12:44:04 version -- app/version.sh@14 -- # tr -d '"' 00:05:38.161 12:44:04 version -- app/version.sh@19 -- # patch=0 00:05:38.161 12:44:04 version -- app/version.sh@20 -- # get_header_version suffix 00:05:38.161 12:44:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:38.161 12:44:04 version -- app/version.sh@14 -- # cut -f2 00:05:38.161 12:44:04 version -- app/version.sh@14 -- # tr -d '"' 00:05:38.161 12:44:04 version -- app/version.sh@20 -- # suffix=-pre 00:05:38.161 12:44:04 version -- app/version.sh@22 -- # version=25.1 00:05:38.161 12:44:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:38.161 12:44:04 version -- app/version.sh@28 -- # version=25.1rc0 00:05:38.161 12:44:04 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:05:38.161 12:44:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:38.161 12:44:04 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:38.161 12:44:04 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:38.161 00:05:38.161 real 0m0.253s 00:05:38.161 user 0m0.145s 00:05:38.161 sys 0m0.157s 00:05:38.161 12:44:04 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.161 12:44:04 version -- 
common/autotest_common.sh@10 -- # set +x 00:05:38.161 ************************************ 00:05:38.161 END TEST version 00:05:38.161 ************************************ 00:05:38.161 12:44:04 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:38.161 12:44:04 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:38.161 12:44:04 -- spdk/autotest.sh@194 -- # uname -s 00:05:38.161 12:44:04 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:38.161 12:44:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:38.161 12:44:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:38.161 12:44:04 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:38.161 12:44:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:38.161 12:44:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:38.161 12:44:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.161 12:44:04 -- common/autotest_common.sh@10 -- # set +x 00:05:38.417 12:44:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:38.417 12:44:04 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:38.417 12:44:04 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:38.417 12:44:04 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:38.417 12:44:04 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:05:38.418 12:44:04 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:38.418 12:44:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:38.418 12:44:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.418 12:44:04 -- common/autotest_common.sh@10 -- # set +x 00:05:38.418 ************************************ 00:05:38.418 START TEST nvmf_rdma 00:05:38.418 ************************************ 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:38.418 * Looking for test storage... 00:05:38.418 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.418 12:44:04 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:38.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.418 --rc genhtml_branch_coverage=1 00:05:38.418 --rc genhtml_function_coverage=1 00:05:38.418 --rc genhtml_legend=1 00:05:38.418 --rc geninfo_all_blocks=1 00:05:38.418 --rc geninfo_unexecuted_blocks=1 00:05:38.418 00:05:38.418 ' 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:38.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.418 --rc genhtml_branch_coverage=1 00:05:38.418 --rc genhtml_function_coverage=1 00:05:38.418 --rc genhtml_legend=1 00:05:38.418 --rc geninfo_all_blocks=1 00:05:38.418 --rc geninfo_unexecuted_blocks=1 00:05:38.418 00:05:38.418 ' 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:38.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.418 --rc genhtml_branch_coverage=1 00:05:38.418 --rc genhtml_function_coverage=1 00:05:38.418 --rc genhtml_legend=1 00:05:38.418 --rc geninfo_all_blocks=1 00:05:38.418 --rc geninfo_unexecuted_blocks=1 00:05:38.418 00:05:38.418 ' 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:38.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.418 --rc genhtml_branch_coverage=1 00:05:38.418 --rc genhtml_function_coverage=1 00:05:38.418 --rc genhtml_legend=1 00:05:38.418 --rc geninfo_all_blocks=1 00:05:38.418 --rc geninfo_unexecuted_blocks=1 00:05:38.418 00:05:38.418 ' 00:05:38.418 12:44:04 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:05:38.418 12:44:04 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:38.418 12:44:04 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.418 12:44:04 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:38.676 ************************************ 00:05:38.676 START TEST nvmf_target_core 00:05:38.676 ************************************ 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:05:38.676 * Looking for test storage... 00:05:38.676 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.676 12:44:04 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:38.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.676 --rc genhtml_branch_coverage=1 00:05:38.676 --rc genhtml_function_coverage=1 00:05:38.676 --rc genhtml_legend=1 00:05:38.676 --rc geninfo_all_blocks=1 00:05:38.676 --rc geninfo_unexecuted_blocks=1 00:05:38.676 00:05:38.676 ' 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:38.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.676 --rc genhtml_branch_coverage=1 00:05:38.676 --rc genhtml_function_coverage=1 00:05:38.676 --rc genhtml_legend=1 00:05:38.676 --rc geninfo_all_blocks=1 00:05:38.676 --rc geninfo_unexecuted_blocks=1 00:05:38.676 00:05:38.676 ' 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:38.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.676 --rc genhtml_branch_coverage=1 00:05:38.676 --rc genhtml_function_coverage=1 00:05:38.676 --rc genhtml_legend=1 00:05:38.676 --rc geninfo_all_blocks=1 00:05:38.676 --rc geninfo_unexecuted_blocks=1 00:05:38.676 00:05:38.676 ' 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:38.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.676 --rc genhtml_branch_coverage=1 00:05:38.676 --rc genhtml_function_coverage=1 00:05:38.676 --rc genhtml_legend=1 00:05:38.676 --rc geninfo_all_blocks=1 00:05:38.676 --rc geninfo_unexecuted_blocks=1 00:05:38.676 00:05:38.676 ' 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.676 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:38.677 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.677 12:44:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:38.935 
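(A note on the "[: : integer expression expected" message above: it is printed every time test/nvmf/common.sh is sourced, and the xtrace shows why -- line 33 evaluates '[' '' -eq 1 ']', i.e. an unset variable reaches an arithmetic -eq test. A minimal sketch of the failure and a guard; "flag" is a hypothetical stand-in, since the log does not show which variable common.sh line 33 actually reads:

    flag=""                                   # unset/empty, as in the log
    [ "$flag" -eq 1 ] && echo enabled         # -> bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo enabled    # guarded: empty defaults to 0, test stays quiet

The error is harmless here -- the test simply evaluates false and the script continues -- which is why the run proceeds past it.)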
************************************ 00:05:38.935 START TEST nvmf_abort 00:05:38.935 ************************************ 00:05:38.935 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:38.935 * Looking for test storage... 00:05:38.935 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:38.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.936 --rc genhtml_branch_coverage=1 00:05:38.936 --rc genhtml_function_coverage=1 00:05:38.936 --rc genhtml_legend=1 00:05:38.936 --rc geninfo_all_blocks=1 00:05:38.936 --rc geninfo_unexecuted_blocks=1 00:05:38.936 00:05:38.936 ' 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:38.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.936 --rc genhtml_branch_coverage=1 00:05:38.936 --rc genhtml_function_coverage=1 00:05:38.936 --rc genhtml_legend=1 00:05:38.936 --rc geninfo_all_blocks=1 00:05:38.936 --rc geninfo_unexecuted_blocks=1 00:05:38.936 00:05:38.936 ' 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:38.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.936 --rc genhtml_branch_coverage=1 00:05:38.936 --rc genhtml_function_coverage=1 00:05:38.936 --rc genhtml_legend=1 00:05:38.936 --rc geninfo_all_blocks=1 00:05:38.936 --rc geninfo_unexecuted_blocks=1 00:05:38.936 00:05:38.936 ' 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:38.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.936 --rc genhtml_branch_coverage=1 00:05:38.936 --rc genhtml_function_coverage=1 00:05:38.936 --rc genhtml_legend=1 00:05:38.936 --rc geninfo_all_blocks=1 00:05:38.936 --rc geninfo_unexecuted_blocks=1 00:05:38.936 00:05:38.936 ' 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:38.936 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:05:38.936 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:38.937 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:38.937 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:38.937 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:38.937 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:38.937 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:38.937 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:38.937 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:39.194 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:39.194 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:39.194 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:39.194 12:44:05 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:47.294 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:47.294 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:47.295 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:47.295 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:47.295 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:47.295 6: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:05:47.295 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:47.295 altname enp217s0f0np0 00:05:47.295 altname ens818f0np0 00:05:47.295 inet 192.168.100.8/24 scope global mlx_0_0 00:05:47.295 valid_lft forever preferred_lft forever 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:47.295 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:47.295 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:47.295 altname enp217s0f1np1 00:05:47.295 altname ens818f1np1 00:05:47.295 inet 192.168.100.9/24 scope global mlx_0_1 00:05:47.295 valid_lft forever preferred_lft forever 00:05:47.295 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:47.296 12:44:13 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:47.296 192.168.100.9' 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:47.296 192.168.100.9' 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:47.296 192.168.100.9' 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:47.296 12:44:13 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3986449 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3986449 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3986449 ']' 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.296 12:44:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:47.296 [2024-11-27 12:44:13.647481] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:05:47.296 [2024-11-27 12:44:13.647532] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:47.554 [2024-11-27 12:44:13.737164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.554 [2024-11-27 12:44:13.777192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:47.554 [2024-11-27 12:44:13.777233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:47.554 [2024-11-27 12:44:13.777242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:47.554 [2024-11-27 12:44:13.777250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:47.554 [2024-11-27 12:44:13.777257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
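(The startup sequence above -- nvmfappstart -m 0xE, nvmfpid=3986449, waitforlisten -- boils down to launching nvmf_tgt in the background and polling its RPC socket before any rpc_cmd is issued. A minimal sketch of that start-and-wait pattern, run from the spdk tree; the retry count and sleep interval here are illustrative, not the harness's actual values:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # same flags the log shows
    nvmfpid=$!
    # Wait until the target answers on the default UNIX-domain RPC socket
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done

Once the socket answers, the rpc_cmd calls that follow -- nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_listener -- can be issued safely.)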
00:05:47.554 [2024-11-27 12:44:13.778659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.554 [2024-11-27 12:44:13.778742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.554 [2024-11-27 12:44:13.778743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.119 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.119 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:48.119 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:48.119 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:48.119 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.376 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.377 [2024-11-27 12:44:14.576352] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x126d570/0x1271a60) succeed. 00:05:48.377 [2024-11-27 12:44:14.596486] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x126eb60/0x12b3100) succeed. 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.377 Malloc0 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.377 Delay0 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:48.377 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.634 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.634 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:05:48.634 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.634 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.634 [2024-11-27 12:44:14.771474] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:48.634 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.634 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:05:48.634 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.634 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:48.634 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.634 12:44:14 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:48.634 [2024-11-27 12:44:14.887297] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:51.157 Initializing NVMe Controllers 00:05:51.157 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:05:51.157 controller IO queue size 128 less than required 00:05:51.157 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:51.157 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:51.157 Initialization complete. Launching workers. 
00:05:51.157 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42897 00:05:51.157 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42958, failed to submit 62 00:05:51.157 success 42898, unsuccessful 60, failed 0 00:05:51.157 12:44:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:51.157 12:44:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.157 12:44:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.157 12:44:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.157 12:44:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:51.157 12:44:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:51.157 12:44:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:05:51.157 rmmod nvme_rdma 00:05:51.157 rmmod nvme_fabrics 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3986449 ']' 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3986449 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3986449 ']' 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3986449 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3986449 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3986449' 00:05:51.157 killing process with pid 3986449 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3986449 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3986449 00:05:51.157 12:44:17 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:05:51.157 00:05:51.157 real 0m12.283s 00:05:51.157 user 0m15.221s 00:05:51.157 sys 0m6.860s 00:05:51.157 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.158 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.158 ************************************ 00:05:51.158 END TEST nvmf_abort 00:05:51.158 ************************************ 00:05:51.158 12:44:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:51.158 12:44:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:51.158 12:44:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.158 12:44:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:51.158 ************************************ 00:05:51.158 START TEST nvmf_ns_hotplug_stress 00:05:51.158 ************************************ 00:05:51.158 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:51.158 * Looking for test storage... 00:05:51.415 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.415 --rc genhtml_branch_coverage=1 00:05:51.415 --rc genhtml_function_coverage=1 00:05:51.415 --rc genhtml_legend=1 00:05:51.415 --rc geninfo_all_blocks=1 00:05:51.415 --rc geninfo_unexecuted_blocks=1 00:05:51.415 00:05:51.415 ' 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.415 --rc genhtml_branch_coverage=1 00:05:51.415 --rc genhtml_function_coverage=1 00:05:51.415 --rc genhtml_legend=1 00:05:51.415 --rc geninfo_all_blocks=1 00:05:51.415 --rc geninfo_unexecuted_blocks=1 00:05:51.415 00:05:51.415 ' 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.415 --rc genhtml_branch_coverage=1 00:05:51.415 --rc genhtml_function_coverage=1 00:05:51.415 --rc genhtml_legend=1 00:05:51.415 --rc geninfo_all_blocks=1 00:05:51.415 --rc geninfo_unexecuted_blocks=1 00:05:51.415 00:05:51.415 ' 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:51.415 --rc genhtml_branch_coverage=1 00:05:51.415 --rc genhtml_function_coverage=1 00:05:51.415 --rc genhtml_legend=1 00:05:51.415 --rc geninfo_all_blocks=1 00:05:51.415 --rc geninfo_unexecuted_blocks=1 00:05:51.415 00:05:51.415 ' 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:51.415 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.416 12:44:17 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.416 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:51.416 12:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:59.524 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:59.524 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:59.524 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:59.524 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:59.524 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:59.524 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:59.524 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:59.524 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:59.525 12:44:25 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:59.525 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:59.525 12:44:25 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:59.525 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:59.525 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
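The PCI walk being traced here reduces to a small sysfs probe; a standalone sketch of the same discovery step, assuming the sysfs layout and the 0000:d9:00.x Mellanox functions reported in this run:

  # For each ConnectX port found in the PCI scan, list the netdevs sysfs
  # hangs under the function, as nvmf/common.sh does in the trace above.
  for pci in 0000:d9:00.0 0000:d9:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip to basenames
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done

This is why the log pairs each "Found 0000:d9:00.x (0x15b3 - 0x1015)" line with a "Found net devices under ..." line naming mlx_0_0 / mlx_0_1.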
00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:59.525 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:05:59.525 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:59.784 12:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:59.784 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:59.784 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:59.784 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:59.784 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:59.784 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:59.784 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:59.784 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:59.784 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:59.784 altname enp217s0f0np0 00:05:59.784 altname ens818f0np0 00:05:59.784 inet 192.168.100.8/24 scope global mlx_0_0 00:05:59.784 valid_lft forever preferred_lft forever 00:05:59.784 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:59.784 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:59.785 12:44:26 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:59.785 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:59.785 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:59.785 altname enp217s0f1np1 00:05:59.785 altname ens818f1np1 00:05:59.785 inet 192.168.100.9/24 scope global mlx_0_1 00:05:59.785 valid_lft forever preferred_lft forever 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:59.785 192.168.100.9' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:59.785 192.168.100.9' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:59.785 192.168.100.9' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 
-e 0xFFFF -m 0xE 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3991189 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3991189 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3991189 ']' 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.785 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:00.043 [2024-11-27 12:44:26.177535] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:06:00.043 [2024-11-27 12:44:26.177584] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.043 [2024-11-27 12:44:26.268030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.043 [2024-11-27 12:44:26.307360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:00.043 [2024-11-27 12:44:26.307398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:00.043 [2024-11-27 12:44:26.307407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.043 [2024-11-27 12:44:26.307416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.043 [2024-11-27 12:44:26.307423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
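nvmfappstart in the trace boils down to launching the target and blocking until its RPC socket answers; a sketch under the flags visible above (the polling loop is a stand-in for the harness's waitforlisten helper, not its actual implementation):

  # -m 0xE pins reactors to cores 1-3 (matching the three reactor-start
  # notices just below), and -e 0xFFFF enables every tracepoint group,
  # which is what makes the /dev/shm/nvmf_trace.0 snapshot usable.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
      sleep 0.5
  done

From here the ns_hotplug_stress body repeats one cycle: grow the 512-byte-block null bdev one size step at a time (bdev_null_resize NULL1 1001, 1002, ...) and pull Delay0 out of cnode1 and back in, all while a 30 s spdk_nvme_perf random-read load (-q 128 -o 512) runs; the suppressed "Read completed with error (sct=0, sc=11)" lines appear to be reads caught while the namespace is detached, which is exactly the condition the test exercises.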
00:06:00.043 [2024-11-27 12:44:26.308838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.043 [2024-11-27 12:44:26.308914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.043 [2024-11-27 12:44:26.308917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.043 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.043 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:00.043 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:00.043 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:00.043 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:00.301 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:00.301 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:00.301 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:00.301 [2024-11-27 12:44:26.635872] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1653570/0x1657a60) succeed. 00:06:00.301 [2024-11-27 12:44:26.645113] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1654b60/0x1699100) succeed. 00:06:00.559 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:00.815 12:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:00.815 [2024-11-27 12:44:27.143929] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:00.815 12:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:01.071 12:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:01.327 Malloc0 00:06:01.328 12:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:01.584 Delay0 00:06:01.584 12:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.584 12:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:06:01.842 NULL1 00:06:01.842 12:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:02.099 12:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3991552 00:06:02.099 12:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:02.099 12:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552 00:06:02.099 12:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.479 Read completed with error (sct=0, sc=11) 00:06:03.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.479 12:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.480 12:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:03.480 12:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:03.480 true 00:06:03.736 12:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552 00:06:03.736 12:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.299 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.556 12:44:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.556 12:44:30 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:04.556 12:44:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:04.812 true 00:06:04.812 12:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552 00:06:04.812 12:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.743 12:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.743 12:44:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:05.743 12:44:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:06.000 true 00:06:06.000 12:44:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552 00:06:06.000 12:44:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.932 12:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.932 12:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:06.932 12:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:07.188 true 00:06:07.189 12:44:33 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552 00:06:07.189 12:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.119 12:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.376 12:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:08.376 12:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:08.376 true 00:06:08.376 12:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552 00:06:08.376 12:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.307 12:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.564 12:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:09.564 12:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:09.564 true 00:06:09.564 12:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552 00:06:09.564 12:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.821 12:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.077 12:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:10.077 12:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
00:06:10.334 true
00:06:10.334 12:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:10.334 12:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:11.263 12:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:11.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:11.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:11.520 12:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:06:11.520 12:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:06:11.777 true
00:06:11.777 12:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:11.777 12:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:12.708 12:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:12.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:12.708 12:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:06:12.708 12:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:06:12.969 true
00:06:12.969 12:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:12.969 12:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:13.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:13.908 12:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:13.908 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:13.908 12:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:06:13.908 12:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:06:14.165 true
00:06:14.165 12:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:14.165 12:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:15.096 12:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:15.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:15.096 12:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:06:15.096 12:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:06:15.353 true
00:06:15.353 12:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:15.353 12:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:16.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:16.284 12:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:16.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:16.284 12:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:06:16.284 12:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:06:16.541 true
00:06:16.541 12:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:16.541 12:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:17.471 12:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:17.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:17.471 12:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:06:17.471 12:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:06:17.727 true
00:06:17.727 12:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:17.727 12:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.984 12:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:18.241 12:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:06:18.241 12:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:06:18.241 true
00:06:18.241 12:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:18.241 12:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:19.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:19.611 12:44:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:19.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:19.611 12:44:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:06:19.611 12:44:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:06:19.868 true
00:06:19.868 12:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:19.868 12:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:20.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:20.798 12:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:20.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:20.798 12:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:06:20.798 12:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:06:21.055 true
00:06:21.055 12:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:21.055 12:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:21.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:21.983 12:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:21.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:21.983 12:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:06:21.983 12:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:06:22.240 true
00:06:22.240 12:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:22.240 12:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:23.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:23.171 12:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:23.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:23.171 12:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:06:23.171 12:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:06:23.428 true
00:06:23.428 12:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:23.428 12:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:24.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:24.459 12:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:24.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:24.459 12:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:06:24.459 12:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:06:24.748 true
00:06:24.748 12:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:24.748 12:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:25.325 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:25.325 12:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:25.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:25.582 12:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:06:25.582 12:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:06:25.838 true
00:06:25.838 12:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:25.839 12:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:26.095 12:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:26.351 12:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:06:26.351 12:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:06:26.351 true
00:06:26.351 12:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:26.351 12:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:27.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:27.716 12:44:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:27.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:27.716 12:44:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:06:27.716 12:44:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:06:27.972 true
00:06:27.972 12:44:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:27.972 12:44:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:28.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:28.900 12:44:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:28.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:28.900 12:44:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:06:28.900 12:44:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:06:29.157 true
00:06:29.157 12:44:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:29.157 12:44:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:30.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:30.087 12:44:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:30.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:30.087 12:44:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:06:30.087 12:44:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:06:30.344 true
00:06:30.344 12:44:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:30.344 12:44:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:31.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:31.275 12:44:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:31.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:31.275 12:44:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:06:31.275 12:44:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:06:31.531 true
00:06:31.531 12:44:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:31.531 12:44:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:32.461 12:44:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:32.461 12:44:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:06:32.461 12:44:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:06:32.719 true
00:06:32.719 12:44:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:32.719 12:44:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:32.979 12:44:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:33.235 12:44:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:06:33.235 12:44:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:06:33.235 true
00:06:33.235 12:44:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:33.235 12:44:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:33.493 12:44:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:33.751 12:44:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:33.751 12:44:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:33.751 true
00:06:34.008 12:45:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:34.008 12:45:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:34.008 12:45:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:34.265 Initializing NVMe Controllers
00:06:34.265 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:06:34.265 Controller IO queue size 128, less than required.
00:06:34.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:34.265 Controller IO queue size 128, less than required.
00:06:34.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:34.265 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:34.265 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:34.265 Initialization complete. Launching workers.
00:06:34.265 ========================================================
00:06:34.265                                                                                Latency(us)
00:06:34.265 Device Information                                                           :     IOPS   MiB/s    Average        min        max
00:06:34.265 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  5317.97    2.60   21736.33     826.07 1007146.70
00:06:34.265 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35591.03   17.38    3596.26    2094.96  285734.66
00:06:34.265 ========================================================
00:06:34.265 Total                                                                        : 40909.00   19.98    5954.37     826.07 1007146.70
00:06:34.265
00:06:34.265 12:45:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:06:34.265 12:45:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:06:34.522 true
00:06:34.522 12:45:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991552
00:06:34.522 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3991552) - No such process
00:06:34.522 12:45:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3991552
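[Editor's note] The trace above is the first phase of target/ns_hotplug_stress.sh: it hot-removes and re-adds namespace 1 while an I/O generator (the process traced as PID 3991552) keeps reading, and grows the NULL1 bdev by one size unit per pass (the resize argument appears to be in MB). A minimal sketch of the cycle, reconstructed from the @44-@50 xtrace records; the rpc.py path, NQN, and bdev names are taken from this log, while the variable names ($rpc, $nqn, $perf_pid) are illustrative, not the verbatim script:

    # Reconstructed sketch of the ns_hotplug_stress main loop (xtrace tags @44..@50).
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    while kill -0 "$perf_pid" 2>/dev/null; do        # loop while the I/O generator is alive
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # hot-remove NSID 1 under active I/O
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # re-attach the Delay0 bdev as a namespace
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"   # grow NULL1 each pass
    done
    wait "$perf_pid"                                 # once kill -0 fails ("No such process"), reap it

The rate-limited "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the initiator's complaints about reads that race a removal; if sc is printed in decimal, 11 corresponds to the NVMe generic status 0x0b, Invalid Namespace or Format, which is exactly what a read against a just-detached namespace should return. The latency summary shows the cost of the churn: NSID 1, the namespace being hot-plugged, averages ~21.7 ms per I/O versus ~3.6 ms for the undisturbed NSID 2.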
00:06:34.522 12:45:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:34.779 12:45:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:35.035 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:35.035 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:35.035 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:35.035 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:35.035 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:35.035 null0
00:06:35.035 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:35.035 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:35.035 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:35.291 null1
00:06:35.291 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:35.291 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:35.291 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:35.548 null2
00:06:35.548 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:35.548 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:35.548 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:06:35.805 null3
00:06:35.805 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:35.805 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:35.805 12:45:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:06:35.805 null4
00:06:35.805 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:35.805 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:35.805 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:06:36.062 null5
00:06:36.062 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:36.062 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:36.062 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:06:36.319 null6
00:06:36.319 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:36.319 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:36.319 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:06:36.576 null7
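[Editor's note] The second phase begins here: after detaching both remaining namespaces (@54/@55), the script creates eight null bdevs, one per worker. Per the positional arguments in the trace, bdev_null_create takes a name, a total size (100, apparently in MB), and a block size (4096 bytes). A sketch of the creation loop as the @58-@60 records suggest; names and sizes are from the log, $rpc as defined in the earlier sketch:

    # Reconstructed from the @58..@60 trace: create null0..null7.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096   # RPC echoes the new bdev name, e.g. "null0"
    done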
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:36.576 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3997831 3997832 3997833 3997835 3997837 3997839 3997841 3997843
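[Editor's note] Each add_remove invocation above is forked into the background and its PID collected; the @66 record is the parent waiting on all eight workers (3997831 through 3997843). A sketch of the spawn loop as the @62-@66 records suggest (again reconstructed, not the verbatim script):

    # Reconstructed from the @62..@66 trace: one backgrounded add_remove worker per bdev.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # nsid i+1 is paired with bdev null<i>
        pids+=($!)
    done
    wait "${pids[@]}"

Because the eight workers share one xtrace stream, their records interleave arbitrarily within the same timestamp; the per-worker @14/@16/@17/@18 lines below should be read as eight independent add/remove loops, not one sequence.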
00:06:36.834 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:36.834 12:45:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:36.834 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:37.095 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:37.095 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:37.095 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:37.095 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:37.095 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:37.095 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:37.095 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:37.095 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:37.095 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:37.095 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:37.352 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:37.352 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:37.352 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:37.352 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:37.352 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:37.352 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:37.352 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:37.352 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:37.352 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:37.352 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:37.608 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:37.608 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:37.608 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:37.608 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:37.608 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:37.608 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:37.608 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:37.608 12:45:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:37.864 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:37.864 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:37.864 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:37.864 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:37.864 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:37.864 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:37.864 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i
< 10 )) 00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.865 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.122 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.378 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.379 12:45:04 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.379 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.379 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.379 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.379 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.379 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.379 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.636 12:45:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.895 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.152 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.153 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.153 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.153 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.153 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.153 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.153 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.410 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.667 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.667 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.667 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.667 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.667 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.667 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.667 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.667 12:45:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.924 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.925 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.182 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.439 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.439 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.439 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.439 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.439 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.439 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.439 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.439 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.696 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.696 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.696 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.696 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.696 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.696 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.696 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.696 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.696 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.696 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.696 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:40.697 12:45:06 
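For orientation, a minimal bash sketch of that loop as reconstructed from the xtrace markers above (ns_hotplug_stress.sh@16-@18); the real script varies the add/remove order between passes, which this sketch omits:

#!/usr/bin/env bash
# Reconstruction of the hotplug loop traced above; not a copy of the
# original target/ns_hotplug_stress.sh, which shuffles the RPC order.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; ++i )); do
    # Attach eight null bdevs as namespaces 1-8 of the subsystem.
    for n in {1..8}; do
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    # Detach them again, generating a namespace-removal event per nsid.
    for n in {1..8}; do
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done

Every add/remove pair makes the target rewrite the subsystem's namespace list in quick succession, which is the hotplug churn this test is after.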
00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:40.697 12:45:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:06:40.697 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:40.697 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:06:40.697 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:06:40.697 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3991189 ']'
00:06:40.697 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3991189
00:06:40.697 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3991189 ']'
00:06:40.697 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3991189
00:06:40.697 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:06:40.697 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:40.697 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3991189
00:06:40.954 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:40.954 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:40.954 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3991189'
killing process with pid 3991189
00:06:40.954 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3991189
00:06:40.954 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3991189
00:06:40.954 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:40.954 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:06:40.954 real 0m49.865s
00:06:40.954 user 3m20.363s
00:06:40.954 sys 0m15.348s
00:06:40.954 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:40.954 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:40.954 ************************************
00:06:40.954 END TEST nvmf_ns_hotplug_stress
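The killprocess records above follow a fixed shape; here is a sketch of that flow reconstructed from the xtrace markers (common/autotest_common.sh@954-@978), not a verbatim copy of the helper:

#!/usr/bin/env bash
# Sketch of the traced killprocess flow, reconstructed from the log.
# The real helper also special-cases processes whose comm is "sudo" (@964).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # @954: refuse an empty pid
    kill -0 "$pid" 2> /dev/null || return 0   # @958: nothing left to kill
    local process_name=
    if [ "$(uname)" = Linux ]; then           # @959
        process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_1 here
    fi
    echo "killing process with pid $pid"      # @972
    kill "$pid"                               # @973: default SIGTERM
    wait "$pid"                               # @978: reap the child, surface its rc
}

killprocess 3991189

The kill -0 probe costs nothing and keeps the helper idempotent; wait only succeeds because the nvmf target is a child of the test shell, which is the case in this log.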
nvmf_ns_hotplug_stress 00:06:40.954 ************************************ 00:06:41.211 12:45:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:06:41.211 12:45:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.211 12:45:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.211 12:45:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:41.211 ************************************ 00:06:41.211 START TEST nvmf_delete_subsystem 00:06:41.211 ************************************ 00:06:41.211 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:06:41.211 * Looking for test storage... 00:06:41.211 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:41.211 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:41.211 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:41.211 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:41.211 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:41.211 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.211 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:41.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.212 --rc genhtml_branch_coverage=1 00:06:41.212 --rc genhtml_function_coverage=1 00:06:41.212 --rc genhtml_legend=1 00:06:41.212 --rc geninfo_all_blocks=1 00:06:41.212 --rc geninfo_unexecuted_blocks=1 00:06:41.212 00:06:41.212 ' 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:41.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.212 --rc genhtml_branch_coverage=1 00:06:41.212 --rc genhtml_function_coverage=1 00:06:41.212 --rc genhtml_legend=1 00:06:41.212 --rc geninfo_all_blocks=1 00:06:41.212 --rc geninfo_unexecuted_blocks=1 00:06:41.212 00:06:41.212 ' 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:41.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.212 --rc genhtml_branch_coverage=1 00:06:41.212 --rc genhtml_function_coverage=1 00:06:41.212 --rc genhtml_legend=1 00:06:41.212 --rc geninfo_all_blocks=1 00:06:41.212 --rc geninfo_unexecuted_blocks=1 00:06:41.212 00:06:41.212 ' 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:41.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.212 --rc genhtml_branch_coverage=1 00:06:41.212 --rc genhtml_function_coverage=1 00:06:41.212 --rc genhtml_legend=1 00:06:41.212 --rc geninfo_all_blocks=1 00:06:41.212 --rc geninfo_unexecuted_blocks=1 00:06:41.212 00:06:41.212 ' 00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
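The comparison logic condensed above can be sketched in a few lines; this is a reconstruction from the xtrace, simplified to the "<" case and without the decimal-validation helper the real scripts/common.sh uses:

#!/usr/bin/env bash
# Sketch of the version comparison traced above (scripts/common.sh@333-@368):
# returns 0 when the first version string is strictly less than the second.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" splits into (1 15)
    IFS=.-: read -ra ver2 <<< "$2"   # "2" splits into (2)
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing fields compare as 0, so "2" behaves like "2.0".
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov predates 2.x: use legacy --rc lcov_* option names"

Splitting on ".-:" is what lets the same comparator handle versions like 2.0-1 or 1.15:3 without special cases.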
00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:41.212 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh trace condensed: export.sh@2-@6 rebuild and export PATH by prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the system path, then echo the result. Because export.sh has been sourced once per earlier test in this run, the inherited PATH already contains those three toolchain directories several times over, so each traced expansion repeats the full, duplicate-laden string.]
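The duplicate growth is harmless but noisy. If one wanted PATH to stay idempotent across repeated sourcing, a small guard like the following would do; this is a generic sketch, not something paths/export.sh itself does:

#!/usr/bin/env bash
# Generic sketch: prepend a directory to PATH only if it is not already there.
# NOT what export.sh does; export.sh re-prepends unconditionally on each source.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already present: leave PATH untouched
        *) PATH="$1:$PATH" ;;     # otherwise prepend
    esac
}

for dir in /opt/golangci/1.54.2/bin /opt/protoc/21.7/bin /opt/go/1.21.1/bin; do
    path_prepend "$dir"
done
export PATH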
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
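The "integer expression expected" complaint is bash's [ refusing to compare an empty string numerically: whatever flag common.sh line 33 reads is unset in this run, so the test expands to [ '' -eq 1 ]. The usual hardening is to default the expansion; a generic pattern follows, with a placeholder variable name since the trace does not show which flag is tested:

#!/usr/bin/env bash
# SOME_FLAG is hypothetical; the xtrace does not reveal the real variable name.
SOME_FLAG=""

# This form fails with "[: : integer expression expected" when the flag is empty:
#   [ "$SOME_FLAG" -eq 1 ] && echo enabled

# Defaulting the expansion keeps the test well-formed either way:
[ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled || echo disabled

Here the error is cosmetic: the empty comparison evaluates false, the branch is skipped, and the script continues, as the following records show.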
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.469 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:41.469 12:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:51.437 12:45:16 
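The "[: : integer expression expected" complaint from common.sh line 33 above is bash rejecting '[' '' -eq 1 ']': the -eq operator requires integers on both sides, and the left side expanded to an empty string. A minimal sketch of the usual guard; the variable name here is hypothetical, since the trace does not show which variable was empty:

    # '[' '' -eq 1 ']' fails because -eq needs an integer on each side.
    # Defaulting the value first keeps the test well-formed:
    some_flag=""                          # hypothetical; empty/unset in the failing case
    if [ "${some_flag:-0}" -eq 1 ]; then  # ${var:-0} substitutes 0 when var is empty or unset
        echo "flag enabled"
    fi
    # Arithmetic evaluation is an alternative with the same defaulting:
    if (( ${some_flag:-0} == 1 )); then
        echo "flag enabled"
    fi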
00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:51.437 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:06:51.438 Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:06:51.438 Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:06:51.438 Found net devices under 0000:d9:00.0: mlx_0_0
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:06:51.438 Found net devices under 0000:d9:00.1: mlx_0_1
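The discovery pass above classifies PCI functions by looking up "vendor:device" IDs in a pci_bus_cache associative array (0x15b3:0x1015 is the Mellanox part found on this rig) and then pattern-matching the device ID to decide per-model behavior. A simplified sketch of that lookup shape; the cache population happens elsewhere in common.sh and is not visible in this trace, so the values below are illustrative:

    declare -A pci_bus_cache    # hypothetical pre-populated cache: "vendor:device" -> PCI addresses
    pci_bus_cache["0x15b3:0x1015"]="0000:d9:00.0 0000:d9:00.1"
    mlx=()
    mlx+=(${pci_bus_cache["0x15b3:0x1015"]})   # unquoted on purpose: word-splits into two addresses
    for pci in "${mlx[@]}"; do
        device=0x1015
        # common.sh@376 writes the right-hand side as \0\x\1\0\1\7 so it is taken
        # literally rather than as a glob; a plain literal behaves the same here:
        [[ $device == 0x1017 ]] && echo "$pci: different model, special-case it"
    done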
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:06:51.438 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:06:51.439 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
altname enp217s0f0np0
altname ens818f0np0
inet 192.168.100.8/24 scope global mlx_0_0
valid_lft forever preferred_lft forever
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:06:51.439 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
altname enp217s0f1np1
altname ens818f1np1
inet 192.168.100.9/24 scope global mlx_0_1
valid_lft forever preferred_lft forever
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
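get_ip_address in the trace above recovers an interface's IPv4 address by taking field 4 of `ip -o -4 addr show` (which looks like "192.168.100.8/24") and stripping the prefix length with cut. The same pipeline as a standalone function:

    get_ip_address() {
        local interface=$1
        # -o prints one line per address; $4 is ADDR/PREFIX, and cut drops /PREFIX
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    ip=$(get_ip_address mlx_0_0)   # -> 192.168.100.8 on this rig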
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:06:51.439 192.168.100.9'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:06:51.439 192.168.100.9'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:06:51.439 192.168.100.9'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4003368
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4003368
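NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP above are peeled off the newline-separated RDMA_IP_LIST with head and tail. The same idiom in isolation:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9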
00:06:51.439 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 4003368 ']'
00:06:51.440 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:51.440 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:51.440 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:51.440 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:51.440 12:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:51.440 [2024-11-27 12:45:16.339170] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
[2024-11-27 12:45:16.339222] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-27 12:45:16.431243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-11-27 12:45:16.471422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-27 12:45:16.471459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-27 12:45:16.471469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-27 12:45:16.471478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-27 12:45:16.471488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
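waitforlisten above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock, bounded by max_retries=100. The helper's body is not part of this trace, so this is only a minimal polling sketch under those assumptions (the function name is a hypothetical stand-in; the real helper also verifies the RPC actually responds):

    wait_for_rpc_socket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # UNIX domain socket is up
            sleep 0.1
        done
        return 1                                     # timed out
    }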
00:06:51.440 [2024-11-27 12:45:16.472747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-27 12:45:16.472750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 12:45:17.237465] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x232a730/0x232ec20) succeed.
[2024-11-27 12:45:17.246202] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x232bc80/0x23702c0) succeed.
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 12:45:17.332179] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
NULL1
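The rpc_cmd calls above drive the running target over its RPC socket; in the SPDK tree rpc_cmd is essentially a wrapper that forwards to scripts/rpc.py (possibly via a long-lived daemon). The same setup written as direct rpc.py invocations, assuming the workspace layout seen in this log:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192          # RDMA transport
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks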
00:06:51.440 12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
Delay0
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4003434
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
12:45:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
[2024-11-27 12:45:17.456385] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
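This is the heart of the test: spdk_nvme_perf is launched in the background against the subsystem, and two seconds later the subsystem is deleted out from under it. A sketch of that flow reconstructed from the trace above; the backgrounding "&" and the "$!" pid capture are implied by perf_pid=4003434 rather than visible in the log:

    # flags exactly as logged; rpc_cmd is the test harness RPC helper
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!      # 4003434 in this run
    sleep 2          # let I/O ramp up
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-I/O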
00:06:53.334 12:45:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:53.334 12:45:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.335 12:45:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.264 NVMe io qpair process completion error 00:06:54.264 NVMe io qpair process completion error 00:06:54.264 NVMe io qpair process completion error 00:06:54.264 NVMe io qpair process completion error 00:06:54.264 NVMe io qpair process completion error 00:06:54.264 NVMe io qpair process completion error 00:06:54.264 NVMe io qpair process completion error 00:06:54.264 12:45:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.264 12:45:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:54.264 12:45:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4003434 00:06:54.264 12:45:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:54.828 12:45:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:54.828 12:45:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4003434 00:06:54.828 12:45:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:55.394 Write completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Write completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Write completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Write completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Write completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Write completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Write completed with error (sct=0, sc=8) 00:06:55.394 starting I/O failed: -6 00:06:55.394 Read completed with 
error (sct=0, sc=8)
00:06:55.394 starting I/O failed: -6
[several hundred repeated 'Read/Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' completion notices from the two failing I/O qpairs elided]
00:06:55.395 Initializing NVMe Controllers
00:06:55.395 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:06:55.395 Controller IO queue size 128, less than required.
00:06:55.395 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:55.395 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:55.395 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:55.395 Initialization complete. Launching workers.
00:06:55.395 ========================================================
00:06:55.395 Latency(us)
00:06:55.395 Device Information : IOPS MiB/s Average min max
00:06:55.395 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.40 0.04 1594764.29 1000083.82 2979637.40
00:06:55.395 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.40 0.04 1596304.51 1001553.89 2980507.60
00:06:55.395 ========================================================
00:06:55.395 Total : 160.80 0.08 1595534.40 1000083.82 2980507.60
00:06:55.395
00:06:55.395 12:45:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
12:45:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4003434
12:45:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
[2024-11-27 12:45:21.556764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
[2024-11-27 12:45:21.556812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
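The delay counter stepped through above (delete_subsystem.sh@34-@38) is a bounded liveness poll: kill -0 probes the perf process without sending a signal, and the counter caps the wait at roughly 30 half-second ticks. Reconstructed as a standalone loop; the failure branch is inferred rather than visible in the log:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # signal 0 = existence check only
        if (( delay++ > 30 )); then             # ~15 s budget at 0.5 s per tick
            echo "perf did not exit after subsystem delete" >&2
            exit 1
        fi
        sleep 0.5
    done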
00:06:55.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:55.960 12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4003434
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4003434) - No such process
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4003434
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4003434
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 4003434
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 12:45:22.072402] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
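The NOT wait 4003434 sequence above (autotest_common.sh@652-@679) asserts that a command fails: wait returns non-zero because the perf process is already gone, and the wrapper inverts that into success. A reduced sketch of the idiom; the real helper also screens exit statuses above 128 (signal deaths), which is elided here:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))        # succeed only if the wrapped command failed
    }
    NOT wait 4003434 && echo "perf exited as expected"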
xtrace_disable 00:06:55.960 12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.960 12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.960 12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4004261 00:06:55.960 12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:55.960 12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:55.960 12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:06:55.960 12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.961 [2024-11-27 12:45:22.169572] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:56.217 12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:56.217 12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:06:56.217 12:45:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:56.781 12:45:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:56.781 12:45:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:06:56.781 12:45:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:57.346 12:45:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.346 12:45:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:06:57.346 12:45:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:57.910 12:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.910 12:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:06:57.910 12:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:58.474 12:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:58.474 12:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:06:58.474 12:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.040 12:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.040 12:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:06:59.040 12:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.298 12:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.298 12:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:06:59.298 12:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.864 12:45:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.864 12:45:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:06:59.864 12:45:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.428 12:45:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.428 12:45:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:07:00.428 12:45:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.992 12:45:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.992 12:45:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:07:00.992 12:45:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.557 12:45:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.557 12:45:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:07:01.557 12:45:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.813 12:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.813 12:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:07:01.813 12:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.376 12:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.376 12:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:07:02.376 12:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.946 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.946 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:07:02.946 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.946 Initializing NVMe Controllers 00:07:02.946 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:02.946 Controller IO queue size 128, less than required. 00:07:02.946 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:02.946 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:02.946 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:02.946 Initialization complete. Launching workers. 00:07:02.946 ======================================================== 00:07:02.946 Latency(us) 00:07:02.946 Device Information : IOPS MiB/s Average min max 00:07:02.946 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001216.02 1000052.52 1004066.42 00:07:02.946 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002469.89 1000067.17 1005776.51 00:07:02.946 ======================================================== 00:07:02.946 Total : 256.00 0.12 1001842.95 1000052.52 1005776.51 00:07:02.946 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4004261 00:07:03.511 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4004261) - No such process 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4004261 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:03.511 rmmod nvme_rdma 00:07:03.511 rmmod nvme_fabrics 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4003368 ']' 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4003368 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 4003368 ']' 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 4003368 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
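An aside for readers following this trace: the wall of sleep 0.5 lines above is delete_subsystem.sh's bounded liveness poll (steps 57, 58, and 60 in the trace): probe the perf job with kill -0 until the subsystem deletion makes it exit, giving up once the delay counter passes 20. A minimal standalone sketch of that pattern; the helper name and the stand-in job are illustrative, not from the SPDK tree:

  wait_for_exit() {
      local pid=$1 delay=0
      # kill -0 probes for existence without sending a signal
      while kill -0 "$pid" 2>/dev/null; do
          (( delay++ > 20 )) && return 1   # budget exhausted: ~10s of 0.5s sleeps
          sleep 0.5
      done
      return 0
  }

  sleep 30 &       # stand-in for the spdk_nvme_perf job the test launches
  job=$!
  kill "$job"      # stand-in for the exit that deleting the subsystem provokes
  wait_for_exit "$job" && echo "pid $job exited within the budget"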
00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4003368 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4003368' 00:07:03.511 killing process with pid 4003368 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 4003368 00:07:03.511 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 4003368 00:07:03.769 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:03.769 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:03.769 00:07:03.769 real 0m22.607s 00:07:03.769 user 0m50.790s 00:07:03.769 sys 0m7.951s 00:07:03.769 12:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.769 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.769 ************************************ 00:07:03.769 END TEST nvmf_delete_subsystem 00:07:03.769 ************************************ 00:07:03.769 12:45:30 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:03.769 12:45:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.769 12:45:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.769 12:45:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:03.769 ************************************ 00:07:03.769 START TEST nvmf_host_management 00:07:03.769 ************************************ 00:07:03.769 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:04.027 * Looking for test storage... 
00:07:04.027 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.027 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.028 --rc genhtml_branch_coverage=1 00:07:04.028 --rc genhtml_function_coverage=1 00:07:04.028 --rc genhtml_legend=1 00:07:04.028 --rc geninfo_all_blocks=1 00:07:04.028 --rc geninfo_unexecuted_blocks=1 00:07:04.028 00:07:04.028 ' 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.028 --rc genhtml_branch_coverage=1 00:07:04.028 --rc genhtml_function_coverage=1 00:07:04.028 --rc genhtml_legend=1 00:07:04.028 --rc geninfo_all_blocks=1 00:07:04.028 --rc geninfo_unexecuted_blocks=1 00:07:04.028 00:07:04.028 ' 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:04.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.028 --rc genhtml_branch_coverage=1 00:07:04.028 --rc genhtml_function_coverage=1 00:07:04.028 --rc genhtml_legend=1 00:07:04.028 --rc geninfo_all_blocks=1 00:07:04.028 --rc geninfo_unexecuted_blocks=1 00:07:04.028 00:07:04.028 ' 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.028 --rc genhtml_branch_coverage=1 00:07:04.028 --rc genhtml_function_coverage=1 00:07:04.028 --rc genhtml_legend=1 00:07:04.028 --rc geninfo_all_blocks=1 00:07:04.028 --rc geninfo_unexecuted_blocks=1 00:07:04.028 00:07:04.028 ' 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:04.028 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:04.028 12:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:13.998 12:45:38 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:13.998 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:13.999 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:13.999 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:13.999 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:d9:00.1: mlx_0_1' 00:07:13.999 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
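The get_ip_address helper traced over the next lines is worth restating standalone: ip -o -4 prints one line per address, field 4 of that line is the CIDR form, and cut strips the prefix length. The interface names and 192.168.100.x addresses are this rig's; substitute your own:

  get_ip_address() {
      local interface=$1
      # field 4 of `ip -o -4 addr show` is e.g. 192.168.100.8/24; cut drops the /24
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address mlx_0_0   # prints 192.168.100.8 on this test bed
  get_ip_address mlx_0_1   # prints 192.168.100.9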
00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:13.999 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:13.999 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:14.000 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:14.000 altname enp217s0f0np0 00:07:14.000 altname ens818f0np0 00:07:14.000 inet 192.168.100.8/24 scope global mlx_0_0 00:07:14.000 valid_lft forever preferred_lft forever 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:14.000 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:14.000 link/ether ec:0d:9a:8b:2d:dd brd 
ff:ff:ff:ff:ff:ff 00:07:14.000 altname enp217s0f1np1 00:07:14.000 altname ens818f1np1 00:07:14.000 inet 192.168.100.9/24 scope global mlx_0_1 00:07:14.000 valid_lft forever preferred_lft forever 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:14.000 12:45:38 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:14.000 192.168.100.9' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:14.000 192.168.100.9' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:14.000 192.168.100.9' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4009765 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4009765 00:07:14.000 
12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4009765 ']' 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.000 12:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.000 [2024-11-27 12:45:38.986946] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:07:14.000 [2024-11-27 12:45:38.987005] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.001 [2024-11-27 12:45:39.081366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.001 [2024-11-27 12:45:39.122834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.001 [2024-11-27 12:45:39.122876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.001 [2024-11-27 12:45:39.122886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.001 [2024-11-27 12:45:39.122894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.001 [2024-11-27 12:45:39.122901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
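For context, the nvmf_tgt launch and the waitforlisten call traced above boil down to: start the target in the background, then block until its RPC socket answers. A simplified stand-in for the suite's helper (the binary path and flags are the ones from this run; the polling loop is an illustration, not waitforlisten's exact body):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  for i in {1..100}; do
      # framework_wait_init returns once the app has finished initializing
      "$rpc" -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup"; exit 1; }
      sleep 0.1
  done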
00:07:14.001 [2024-11-27 12:45:39.124644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.001 [2024-11-27 12:45:39.124728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.001 [2024-11-27 12:45:39.124837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.001 [2024-11-27 12:45:39.124839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:14.001 12:45:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.001 12:45:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:14.001 12:45:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:14.001 12:45:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.001 12:45:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.001 12:45:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.001 12:45:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:14.001 12:45:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.001 12:45:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.001 [2024-11-27 12:45:39.902755] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8ec0f0/0x8f05e0) succeed. 00:07:14.001 [2024-11-27 12:45:39.912681] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8ed780/0x931c80) succeed. 
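The create_subsystem step that follows assembles an RPC batch in rpcs.txt and plays it through rpc_cmd. The file itself is never echoed, so the calls below are a reconstruction from the RPC names and the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values visible elsewhere in this log; host_management.sh's exact arguments may differ:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420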
00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.001 Malloc0 00:07:14.001 [2024-11-27 12:45:40.108519] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4010063 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4010063 /var/tmp/bdevperf.sock 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4010063 ']' 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:14.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
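A note on the --json /dev/fd/63 argument in the bdevperf invocation above: the JSON config (its body is printed a few lines below) never touches disk; gen_nvmf_target_json's output is handed over through process substitution as an anonymous pipe. The mechanism in isolation, with a stub generator in place of the real helper:

  gen_config() {
      # stub standing in for gen_nvmf_target_json; the real fragment appears below in the log
      printf '%s\n' '{ "subsystems": [] }'
  }
  # <(...) expands to a /dev/fd/NN path that the child process opens like an ordinary file
  cat <(gen_config)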
00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:14.001 { 00:07:14.001 "params": { 00:07:14.001 "name": "Nvme$subsystem", 00:07:14.001 "trtype": "$TEST_TRANSPORT", 00:07:14.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:14.001 "adrfam": "ipv4", 00:07:14.001 "trsvcid": "$NVMF_PORT", 00:07:14.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:14.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:14.001 "hdgst": ${hdgst:-false}, 00:07:14.001 "ddgst": ${ddgst:-false} 00:07:14.001 }, 00:07:14.001 "method": "bdev_nvme_attach_controller" 00:07:14.001 } 00:07:14.001 EOF 00:07:14.001 )") 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:14.001 12:45:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:14.001 "params": { 00:07:14.001 "name": "Nvme0", 00:07:14.001 "trtype": "rdma", 00:07:14.001 "traddr": "192.168.100.8", 00:07:14.001 "adrfam": "ipv4", 00:07:14.001 "trsvcid": "4420", 00:07:14.001 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.001 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:14.001 "hdgst": false, 00:07:14.001 "ddgst": false 00:07:14.001 }, 00:07:14.001 "method": "bdev_nvme_attach_controller" 00:07:14.001 }' 00:07:14.001 [2024-11-27 12:45:40.213189] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:07:14.001 [2024-11-27 12:45:40.213237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010063 ] 00:07:14.001 [2024-11-27 12:45:40.304500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.001 [2024-11-27 12:45:40.344016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.258 Running I/O for 10 seconds... 
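The waitforio helper traced next polls the bdev's I/O counters over the bdevperf RPC socket until a minimum number of reads has completed, so the fault-injection step only fires once traffic is really flowing. A condensed sketch assuming the suite's rpc_cmd wrapper; the socket path, bdev name, jq filter, and 100-op threshold match the trace, while the loop body is simplified:

  waitforio() {
      local sock=$1 bdev=$2 i ops
      for (( i = 10; i > 0; i-- )); do
          ops=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
          [ "$ops" -ge 100 ] && return 0   # the trace reads 1707 ops on its first probe
          sleep 1
      done
      return 1
  }

  waitforio /var/tmp/bdevperf.sock Nvme0n1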
00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1707 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1707 -ge 100 ']' 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:14.824 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:14.825 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.825 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
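What steps 84 and 85 do here: with I/O in flight, the host NQN is removed from the subsystem's allowed-host list and immediately re-added, followed by a one-second settle. The ABORTED - SQ DELETION completions that fill the following lines are the expected fallout of the target tearing down that host's queues, not a failure. The same calls issued standalone (rpc.py path as used elsewhere in this log):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  sleep 1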
00:07:14.825 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.825 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:14.825 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.825 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:14.825 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.825 12:45:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:07:15.953 1832.00 IOPS, 114.50 MiB/s [2024-11-27T11:45:42.338Z]
00:07:15.953-00:07:15.955 [2024-11-27 12:45:42.141237 .. 12:45:42.142578] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: [condensed: 64 repeated NOTICE pairs, one per outstanding command at queue depth 64 - 48 WRITEs (lba 105472..111488, len:128, SGL KEYED DATA BLOCK, keys 0x181f00/0x182000/0x182100) and 16 READs (lba 103424..105344, len:128, key 0x182b00) on sqid:1, each completed as ABORTED - SQ DELETION (00/08) cdw0:84428000 sqhd:7210 p:0 m:0 dnr:0 after the host was removed from the subsystem mid-I/O]
00:07:15.955 [2024-11-27 12:45:42.145437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:15.955 task offset: 105472 on job bdev=Nvme0n1 fails
00:07:15.955
00:07:15.955 Latency(us)
00:07:15.955 [2024-11-27T11:45:42.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:15.955 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:15.955 Job: Nvme0n1 ended in about 1.61 seconds with error
00:07:15.955 Verification LBA range: start 0x0 length 0x400
00:07:15.955 Nvme0n1 : 1.61 1138.39 71.15 39.77 0.00 53814.45 2267.55 1020054.73
00:07:15.955 [2024-11-27T11:45:42.340Z] ===================================================================================================================
00:07:15.955 [2024-11-27T11:45:42.340Z] Total : 1138.39 71.15 39.77 0.00 53814.45 2267.55 1020054.73
00:07:15.955 [2024-11-27 12:45:42.147917] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:15.955 12:45:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4010063
00:07:15.955 12:45:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:07:15.955 12:45:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:07:15.955 12:45:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:07:15.955 12:45:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:07:15.955 12:45:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:07:15.955 12:45:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:07:15.955 12:45:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:15.955 {
00:07:15.955 "params": {
00:07:15.955 "name": "Nvme$subsystem",
00:07:15.955 "trtype": "$TEST_TRANSPORT",
00:07:15.955 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:15.955 "adrfam": "ipv4",
00:07:15.955 "trsvcid": "$NVMF_PORT",
00:07:15.955 "subnqn":
"nqn.2016-06.io.spdk:cnode$subsystem", 00:07:15.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:15.955 "hdgst": ${hdgst:-false}, 00:07:15.955 "ddgst": ${ddgst:-false} 00:07:15.955 }, 00:07:15.955 "method": "bdev_nvme_attach_controller" 00:07:15.955 } 00:07:15.955 EOF 00:07:15.955 )") 00:07:15.955 12:45:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:15.955 12:45:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:15.955 12:45:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:15.955 12:45:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:15.955 "params": { 00:07:15.955 "name": "Nvme0", 00:07:15.955 "trtype": "rdma", 00:07:15.955 "traddr": "192.168.100.8", 00:07:15.955 "adrfam": "ipv4", 00:07:15.955 "trsvcid": "4420", 00:07:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:15.955 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:15.955 "hdgst": false, 00:07:15.955 "ddgst": false 00:07:15.955 }, 00:07:15.955 "method": "bdev_nvme_attach_controller" 00:07:15.955 }' 00:07:15.955 [2024-11-27 12:45:42.202025] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:07:15.955 [2024-11-27 12:45:42.202074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010417 ] 00:07:15.955 [2024-11-27 12:45:42.292505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.955 [2024-11-27 12:45:42.331951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.212 Running I/O for 1 seconds... 
00:07:17.583 3112.00 IOPS, 194.50 MiB/s 00:07:17.583 Latency(us) 00:07:17.583 [2024-11-27T11:45:43.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.583 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:17.583 Verification LBA range: start 0x0 length 0x400 00:07:17.583 Nvme0n1 : 1.01 3130.10 195.63 0.00 0.00 20032.39 593.10 39426.46 00:07:17.583 [2024-11-27T11:45:43.968Z] =================================================================================================================== 00:07:17.583 [2024-11-27T11:45:43.968Z] Total : 3130.10 195.63 0.00 0.00 20032.39 593.10 39426.46 00:07:17.583 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 4010063 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:17.583 rmmod nvme_rdma 00:07:17.583 rmmod nvme_fabrics 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4009765 ']' 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4009765 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 4009765 ']' 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 4009765 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4009765 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4009765' 00:07:17.583 killing process with pid 4009765 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 4009765 00:07:17.583 12:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 4009765 00:07:17.842 [2024-11-27 12:45:44.079427] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:17.842 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:17.842 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:17.842 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:17.842 00:07:17.842 real 0m14.020s 00:07:17.842 user 0m25.737s 00:07:17.842 sys 0m7.694s 00:07:17.842 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.842 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.842 ************************************ 00:07:17.842 END TEST nvmf_host_management 00:07:17.842 ************************************ 00:07:17.842 12:45:44 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:17.842 12:45:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:17.842 12:45:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.842 12:45:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:17.842 ************************************ 00:07:17.842 START TEST nvmf_lvol 00:07:17.842 ************************************ 00:07:17.842 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:18.101 * Looking for test storage... 
00:07:18.101 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:07:18.101 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:18.101 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:07:18.101 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:18.101 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:18.101 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:18.101 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:18.101 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:18.101 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:07:18.101 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:07:18.101 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:07:18.101 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' [same option set as above] '
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov [same option set as above] '
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov [same option set as above] '
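For reference, the lt 1.15 2 call above lands in scripts/common.sh cmp_versions, which splits both version strings on IFS=.-: and compares them field by field as decimal integers. A condensed sketch of the same comparison (the real script accumulates lt/gt/eq counters through a case statement and validates each field via decimal; this version takes early returns instead):

# lt "A" "B" is true when version A sorts before version B
lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v f1 f2
    IFS=.-: read -ra ver1 <<< "$1"    # @336: "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"    # @337: "2"    -> (2)
    # @364: walk max(ver1_l, ver2_l) fields, padding the shorter side with 0
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        f1=${ver1[v]:-0} f2=${ver2[v]:-0}
        ((f1 > f2)) && { [[ $op == ">" || $op == ">=" ]]; return; }    # @367
        ((f1 < f2)) && { [[ $op == "<" || $op == "<=" ]]; return; }    # @368
    done
    [[ $op == *=* ]]    # equal throughout: only ops containing '=' succeed
}

Here the first field already decides it (1 < 2), which is the @368 return 0 in the trace: the installed lcov 1.15 predates 2.x, which is why lcov_rc_opt above uses the older --rc lcov_branch_coverage spelling.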
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=[condensed: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended repeatedly ahead of the stock system PATH]
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=[condensed: same toolchain directories, go bin first]
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=[condensed: same toolchain directories, protoc bin first]
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo [condensed: the resulting PATH value]
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:18.102 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:18.102 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- #
LVOL_BDEV_INIT_SIZE=20 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.103 12:45:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:28.072 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.072 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:28.072 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:28.072 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:28.072 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:28.072 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:28.072 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.073 12:45:52 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:28.073 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:28.073 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:28.073 12:45:52 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:28.073 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:28.073 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:28.073 12:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:28.073 
12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:28.073 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:28.073 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:28.073 altname enp217s0f0np0 00:07:28.073 altname ens818f0np0 00:07:28.073 inet 192.168.100.8/24 scope global mlx_0_0 00:07:28.073 valid_lft forever preferred_lft forever 00:07:28.073 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:28.074 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:28.074 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:28.074 altname enp217s0f1np1 00:07:28.074 altname ens818f1np1 00:07:28.074 inet 192.168.100.9/24 scope global mlx_0_1 00:07:28.074 valid_lft forever preferred_lft forever 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@109 -- # continue 2 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:28.074 192.168.100.9' 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:28.074 192.168.100.9' 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:28.074 192.168.100.9' 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:28.074 
12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4015018 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4015018 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 4015018 ']' 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.074 12:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:28.074 [2024-11-27 12:45:53.233892] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:07:28.074 [2024-11-27 12:45:53.233940] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.074 [2024-11-27 12:45:53.321733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.074 [2024-11-27 12:45:53.359152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.074 [2024-11-27 12:45:53.359191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.074 [2024-11-27 12:45:53.359200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.074 [2024-11-27 12:45:53.359208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.074 [2024-11-27 12:45:53.359215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
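nvmfappstart above backgrounds build/bin/nvmf_tgt and then blocks in waitforlisten until the target's RPC socket accepts commands. A minimal sketch of that start-and-wait pattern, assuming this job's workspace path and stock rpc.py behaviour (rpc_get_methods simply fails until the target listens on /var/tmp/spdk.sock):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the target answers.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || exit 1   # stop waiting if the target died during startup
    sleep 0.5
done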
00:07:28.074 [2024-11-27 12:45:53.360714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.074 [2024-11-27 12:45:53.360812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.074 [2024-11-27 12:45:53.360815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.074 12:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.074 12:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:28.074 12:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:28.074 12:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.074 12:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:28.074 12:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.074 12:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:28.074 [2024-11-27 12:45:54.286566] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x858270/0x85c760) succeed. 00:07:28.074 [2024-11-27 12:45:54.295629] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x859860/0x89de00) succeed. 00:07:28.074 12:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:28.333 12:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:28.333 12:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:28.591 12:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:28.591 12:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:28.848 12:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:29.106 12:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=52da6cc2-ffb4-4be5-950c-994154302379 00:07:29.106 12:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 52da6cc2-ffb4-4be5-950c-994154302379 lvol 20 00:07:29.106 12:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ee134aca-ed64-4a15-99db-2ef6e29a03ae 00:07:29.106 12:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:29.365 12:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ee134aca-ed64-4a15-99db-2ef6e29a03ae 00:07:29.623 12:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:29.623 [2024-11-27 12:45:55.989995] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:29.882 12:45:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:29.882 12:45:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4015576 00:07:29.882 12:45:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:29.882 12:45:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:31.253 12:45:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ee134aca-ed64-4a15-99db-2ef6e29a03ae MY_SNAPSHOT 00:07:31.253 12:45:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=06869ccb-30ee-43f5-ae23-ddd9347fa1f5 00:07:31.253 12:45:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ee134aca-ed64-4a15-99db-2ef6e29a03ae 30 00:07:31.510 12:45:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 06869ccb-30ee-43f5-ae23-ddd9347fa1f5 MY_CLONE 00:07:31.510 12:45:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6f1b038f-6d63-47da-a77e-cabafb9641c5 00:07:31.510 12:45:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6f1b038f-6d63-47da-a77e-cabafb9641c5 00:07:31.767 12:45:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4015576 00:07:41.875 Initializing NVMe Controllers 00:07:41.875 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:41.875 Controller IO queue size 128, less than required. 00:07:41.875 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:41.875 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:41.875 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:41.875 Initialization complete. Launching workers. 
00:07:41.875 ========================================================
00:07:41.875 Latency(us)
00:07:41.875 Device Information : IOPS MiB/s Average min max
00:07:41.875 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16372.10 63.95 7819.67 2000.71 37259.74
00:07:41.875 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16306.30 63.70 7850.69 3944.66 47911.95
00:07:41.875 ========================================================
00:07:41.875 Total : 32678.40 127.65 7835.15 2000.71 47911.95
00:07:41.875
00:07:41.875 12:46:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:41.875 12:46:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ee134aca-ed64-4a15-99db-2ef6e29a03ae 00:07:41.875 12:46:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 52da6cc2-ffb4-4be5-950c-994154302379 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:41.875 rmmod nvme_rdma 00:07:41.875 rmmod nvme_fabrics 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4015018 ']' 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4015018 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 4015018 ']' 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 4015018 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.875 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4015018 00:07:42.134 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.134 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.134 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4015018' 00:07:42.134 killing process with pid 4015018 00:07:42.134 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 4015018 00:07:42.134 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 4015018 00:07:42.393 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.393 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:42.393 00:07:42.393 real 0m24.348s 00:07:42.393 user 1m12.820s 00:07:42.393 sys 0m7.855s 00:07:42.393 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.393 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.393 ************************************ 00:07:42.393 END TEST nvmf_lvol 00:07:42.393 ************************************ 00:07:42.393 12:46:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:42.393 12:46:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.393 12:46:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.393 12:46:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.393 ************************************ 00:07:42.393 START TEST nvmf_lvs_grow 00:07:42.393 ************************************ 00:07:42.393 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:42.393 * Looking for test storage... 
00:07:42.393 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:42.393 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:42.393 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:42.393 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:42.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.653 --rc genhtml_branch_coverage=1 00:07:42.653 --rc genhtml_function_coverage=1 00:07:42.653 --rc genhtml_legend=1 00:07:42.653 --rc geninfo_all_blocks=1 00:07:42.653 --rc geninfo_unexecuted_blocks=1 00:07:42.653 00:07:42.653 ' 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:42.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.653 --rc genhtml_branch_coverage=1 00:07:42.653 --rc genhtml_function_coverage=1 00:07:42.653 --rc genhtml_legend=1 00:07:42.653 --rc geninfo_all_blocks=1 00:07:42.653 --rc geninfo_unexecuted_blocks=1 00:07:42.653 00:07:42.653 ' 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:42.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.653 --rc genhtml_branch_coverage=1 00:07:42.653 --rc genhtml_function_coverage=1 00:07:42.653 --rc genhtml_legend=1 00:07:42.653 --rc geninfo_all_blocks=1 00:07:42.653 --rc geninfo_unexecuted_blocks=1 00:07:42.653 00:07:42.653 ' 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:42.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.653 --rc genhtml_branch_coverage=1 00:07:42.653 --rc genhtml_function_coverage=1 00:07:42.653 --rc genhtml_legend=1 00:07:42.653 --rc geninfo_all_blocks=1 00:07:42.653 --rc geninfo_unexecuted_blocks=1 00:07:42.653 00:07:42.653 ' 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
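The cmp_versions trace above (lt 1.15 2) checks whether the installed lcov predates 2.x by splitting both version strings on '.', '-' and ':' and comparing them component by component. A condensed sketch of that comparison, as a hypothetical stand-in for the scripts/common.sh helper rather than a copy of it:

# lt A B: succeed when version A sorts strictly before version B.
lt() {
    local IFS=.-:
    local -a v1=($1) v2=($2)   # unquoted on purpose: split on IFS
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2.x"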
00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.653 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.654 12:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.765 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.765 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.766 12:46:17 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:50.766 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:50.766 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:50.766 12:46:17 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:50.766 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:50.766 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:50.766 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:51.028 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:51.028 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o 
-4 addr show mlx_0_0 00:07:51.028 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:51.028 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:51.028 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:51.028 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:51.028 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:51.028 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:51.028 altname enp217s0f0np0 00:07:51.028 altname ens818f0np0 00:07:51.028 inet 192.168.100.8/24 scope global mlx_0_0 00:07:51.028 valid_lft forever preferred_lft forever 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:51.029 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:51.029 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:51.029 altname enp217s0f1np1 00:07:51.029 altname ens818f1np1 00:07:51.029 inet 192.168.100.9/24 scope global mlx_0_1 00:07:51.029 valid_lft forever preferred_lft forever 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:51.029 192.168.100.9' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:51.029 192.168.100.9' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:51.029 192.168.100.9' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:51.029 
12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4021697 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4021697 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 4021697 ']' 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.029 12:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.029 [2024-11-27 12:46:17.351901] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:07:51.029 [2024-11-27 12:46:17.351953] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.288 [2024-11-27 12:46:17.443546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.288 [2024-11-27 12:46:17.483177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.288 [2024-11-27 12:46:17.483214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.288 [2024-11-27 12:46:17.483224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.288 [2024-11-27 12:46:17.483232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.288 [2024-11-27 12:46:17.483238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
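For reference, the address-discovery and target-startup steps traced above reduce to a short shell pattern. A minimal sketch, assuming SPDK_ROOT points at the built SPDK tree and that mlx_0_0/mlx_0_1 are the RDMA-capable netdevs; the polling loop at the end is a simplified stand-in for the autotest waitforlisten helper, not its actual code:

get_ip_address() {
    # Same pipeline as nvmf/common.sh@117 above: take the CIDR field, drop the prefix.
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma                                  # host-side RDMA transport module

# Start the target on core 0 with all tracepoint groups enabled, as above.
"$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Poll the RPC socket until the target answers (simplified waitforlisten).
until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done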
00:07:51.288 [2024-11-27 12:46:17.483850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.854 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.854 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:51.854 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:51.854 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.854 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.854 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.854 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:52.113 [2024-11-27 12:46:18.419301] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcbbb80/0xcc0070) succeed. 00:07:52.113 [2024-11-27 12:46:18.428791] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcbd030/0xd01710) succeed. 00:07:52.113 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:52.113 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.113 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.113 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.372 ************************************ 00:07:52.372 START TEST lvs_grow_clean 00:07:52.372 ************************************ 00:07:52.372 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:52.372 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:52.372 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:52.372 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:52.372 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:52.372 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:52.372 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:52.372 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.372 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.372 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:52.372 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:52.372 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:52.630 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e87e4fa3-3025-49b8-9716-ab68a6160c27 00:07:52.630 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:52.630 12:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e87e4fa3-3025-49b8-9716-ab68a6160c27 00:07:52.889 12:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:52.889 12:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:52.889 12:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e87e4fa3-3025-49b8-9716-ab68a6160c27 lvol 150 00:07:53.146 12:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bf20b497-0827-475e-b75b-276e7d57807c 00:07:53.146 12:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:53.146 12:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:53.146 [2024-11-27 12:46:19.452180] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:53.146 [2024-11-27 12:46:19.452236] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:53.146 true 00:07:53.146 12:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:53.146 12:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e87e4fa3-3025-49b8-9716-ab68a6160c27 00:07:53.404 12:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:53.404 12:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:53.663 12:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bf20b497-0827-475e-b75b-276e7d57807c 00:07:53.663 12:46:20 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:53.922 [2024-11-27 12:46:20.206638] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:53.922 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:54.181 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:54.181 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4022274 00:07:54.181 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:54.181 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4022274 /var/tmp/bdevperf.sock 00:07:54.181 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 4022274 ']' 00:07:54.181 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:54.181 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.181 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:54.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:54.181 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.181 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:54.181 [2024-11-27 12:46:20.451036] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
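The clean-grow sequence traced above is a fixed recipe; a condensed sketch follows, with rpc.py standing in for the full scripts/rpc.py path and $testdir for the workspace path. The cluster counts the test asserts fall out of the 4 MiB (4194304-byte) cluster size: the 200 MiB file yields 49 data clusters after lvstore metadata, the 150 MiB lvol pins 38 of them, and growing the file to 400 MiB makes 99 clusters available once the lvstore itself is grown:

truncate -s 200M "$testdir/aio_bdev"               # backing file for the AIO bdev
rpc.py bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 49 data clusters
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)       # 150 MiB -> 38 clusters

truncate -s 400M "$testdir/aio_bdev"               # grow the file under the bdev
rpc.py bdev_aio_rescan aio_bdev                    # block count 51200 -> 102400

rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a "$NVMF_FIRST_TARGET_IP" -s 4420

# While bdevperf writes randomly over RDMA, the lvstore is grown into the new space:
rpc.py bdev_lvol_grow_lvstore -u "$lvs"            # total_data_clusters 49 -> 99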
00:07:54.181 [2024-11-27 12:46:20.451085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022274 ] 00:07:54.181 [2024-11-27 12:46:20.540009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.440 [2024-11-27 12:46:20.580009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.440 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.440 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:54.440 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:54.698 Nvme0n1 00:07:54.698 12:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:54.957 [ 00:07:54.957 { 00:07:54.957 "name": "Nvme0n1", 00:07:54.957 "aliases": [ 00:07:54.957 "bf20b497-0827-475e-b75b-276e7d57807c" 00:07:54.957 ], 00:07:54.957 "product_name": "NVMe disk", 00:07:54.957 "block_size": 4096, 00:07:54.957 "num_blocks": 38912, 00:07:54.957 "uuid": "bf20b497-0827-475e-b75b-276e7d57807c", 00:07:54.957 "numa_id": 1, 00:07:54.957 "assigned_rate_limits": { 00:07:54.957 "rw_ios_per_sec": 0, 00:07:54.957 "rw_mbytes_per_sec": 0, 00:07:54.957 "r_mbytes_per_sec": 0, 00:07:54.957 "w_mbytes_per_sec": 0 00:07:54.957 }, 00:07:54.957 "claimed": false, 00:07:54.957 "zoned": false, 00:07:54.957 "supported_io_types": { 00:07:54.957 "read": true, 00:07:54.957 "write": true, 00:07:54.957 "unmap": true, 00:07:54.957 "flush": true, 00:07:54.957 "reset": true, 00:07:54.957 "nvme_admin": true, 00:07:54.957 "nvme_io": true, 00:07:54.957 "nvme_io_md": false, 00:07:54.957 "write_zeroes": true, 00:07:54.957 "zcopy": false, 00:07:54.957 "get_zone_info": false, 00:07:54.957 "zone_management": false, 00:07:54.957 "zone_append": false, 00:07:54.957 "compare": true, 00:07:54.957 "compare_and_write": true, 00:07:54.957 "abort": true, 00:07:54.957 "seek_hole": false, 00:07:54.957 "seek_data": false, 00:07:54.957 "copy": true, 00:07:54.957 "nvme_iov_md": false 00:07:54.957 }, 00:07:54.957 "memory_domains": [ 00:07:54.957 { 00:07:54.957 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:07:54.957 "dma_device_type": 0 00:07:54.957 } 00:07:54.957 ], 00:07:54.957 "driver_specific": { 00:07:54.957 "nvme": [ 00:07:54.957 { 00:07:54.957 "trid": { 00:07:54.957 "trtype": "RDMA", 00:07:54.957 "adrfam": "IPv4", 00:07:54.957 "traddr": "192.168.100.8", 00:07:54.957 "trsvcid": "4420", 00:07:54.957 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:54.957 }, 00:07:54.957 "ctrlr_data": { 00:07:54.957 "cntlid": 1, 00:07:54.957 "vendor_id": "0x8086", 00:07:54.957 "model_number": "SPDK bdev Controller", 00:07:54.957 "serial_number": "SPDK0", 00:07:54.957 "firmware_revision": "25.01", 00:07:54.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.957 "oacs": { 00:07:54.957 "security": 0, 00:07:54.957 "format": 0, 00:07:54.957 "firmware": 0, 00:07:54.957 "ns_manage": 0 00:07:54.957 }, 00:07:54.957 "multi_ctrlr": true, 
00:07:54.957 "ana_reporting": false 00:07:54.957 }, 00:07:54.957 "vs": { 00:07:54.957 "nvme_version": "1.3" 00:07:54.957 }, 00:07:54.957 "ns_data": { 00:07:54.957 "id": 1, 00:07:54.957 "can_share": true 00:07:54.957 } 00:07:54.957 } 00:07:54.957 ], 00:07:54.957 "mp_policy": "active_passive" 00:07:54.957 } 00:07:54.957 } 00:07:54.957 ] 00:07:54.957 12:46:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.957 12:46:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4022517 00:07:54.957 12:46:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:54.957 Running I/O for 10 seconds... 00:07:55.892 Latency(us) 00:07:55.892 [2024-11-27T11:46:22.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.892 Nvme0n1 : 1.00 34848.00 136.12 0.00 0.00 0.00 0.00 0.00 00:07:55.892 [2024-11-27T11:46:22.277Z] =================================================================================================================== 00:07:55.892 [2024-11-27T11:46:22.277Z] Total : 34848.00 136.12 0.00 0.00 0.00 0.00 0.00 00:07:55.892 00:07:56.826 12:46:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e87e4fa3-3025-49b8-9716-ab68a6160c27 00:07:57.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.085 Nvme0n1 : 2.00 35086.00 137.05 0.00 0.00 0.00 0.00 0.00 00:07:57.085 [2024-11-27T11:46:23.470Z] =================================================================================================================== 00:07:57.085 [2024-11-27T11:46:23.470Z] Total : 35086.00 137.05 0.00 0.00 0.00 0.00 0.00 00:07:57.085 00:07:57.085 true 00:07:57.085 12:46:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:57.085 12:46:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e87e4fa3-3025-49b8-9716-ab68a6160c27 00:07:57.343 12:46:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:57.343 12:46:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:57.343 12:46:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4022517 00:07:57.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.909 Nvme0n1 : 3.00 35166.33 137.37 0.00 0.00 0.00 0.00 0.00 00:07:57.909 [2024-11-27T11:46:24.294Z] =================================================================================================================== 00:07:57.909 [2024-11-27T11:46:24.294Z] Total : 35166.33 137.37 0.00 0.00 0.00 0.00 0.00 00:07:57.909 00:07:59.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.284 Nvme0n1 : 4.00 35273.75 137.79 0.00 0.00 0.00 0.00 0.00 00:07:59.284 [2024-11-27T11:46:25.669Z] 
=================================================================================================================== 00:07:59.284 [2024-11-27T11:46:25.669Z] Total : 35273.75 137.79 0.00 0.00 0.00 0.00 0.00 00:07:59.284 00:08:00.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.218 Nvme0n1 : 5.00 35348.20 138.08 0.00 0.00 0.00 0.00 0.00 00:08:00.218 [2024-11-27T11:46:26.603Z] =================================================================================================================== 00:08:00.218 [2024-11-27T11:46:26.603Z] Total : 35348.20 138.08 0.00 0.00 0.00 0.00 0.00 00:08:00.218 00:08:01.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.152 Nvme0n1 : 6.00 35408.67 138.32 0.00 0.00 0.00 0.00 0.00 00:08:01.152 [2024-11-27T11:46:27.537Z] =================================================================================================================== 00:08:01.152 [2024-11-27T11:46:27.537Z] Total : 35408.67 138.32 0.00 0.00 0.00 0.00 0.00 00:08:01.152 00:08:02.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.086 Nvme0n1 : 7.00 35450.29 138.48 0.00 0.00 0.00 0.00 0.00 00:08:02.086 [2024-11-27T11:46:28.471Z] =================================================================================================================== 00:08:02.086 [2024-11-27T11:46:28.471Z] Total : 35450.29 138.48 0.00 0.00 0.00 0.00 0.00 00:08:02.086 00:08:03.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.020 Nvme0n1 : 8.00 35483.12 138.61 0.00 0.00 0.00 0.00 0.00 00:08:03.020 [2024-11-27T11:46:29.406Z] =================================================================================================================== 00:08:03.021 [2024-11-27T11:46:29.406Z] Total : 35483.12 138.61 0.00 0.00 0.00 0.00 0.00 00:08:03.021 00:08:03.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.954 Nvme0n1 : 9.00 35473.00 138.57 0.00 0.00 0.00 0.00 0.00 00:08:03.954 [2024-11-27T11:46:30.339Z] =================================================================================================================== 00:08:03.954 [2024-11-27T11:46:30.339Z] Total : 35473.00 138.57 0.00 0.00 0.00 0.00 0.00 00:08:03.954 00:08:04.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.890 Nvme0n1 : 10.00 35472.60 138.56 0.00 0.00 0.00 0.00 0.00 00:08:04.890 [2024-11-27T11:46:31.275Z] =================================================================================================================== 00:08:04.890 [2024-11-27T11:46:31.275Z] Total : 35472.60 138.56 0.00 0.00 0.00 0.00 0.00 00:08:04.890 00:08:04.890 00:08:04.890 Latency(us) 00:08:04.890 [2024-11-27T11:46:31.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.890 Nvme0n1 : 10.00 35473.19 138.57 0.00 0.00 3605.37 2359.30 9489.61 00:08:04.890 [2024-11-27T11:46:31.275Z] =================================================================================================================== 00:08:04.890 [2024-11-27T11:46:31.275Z] Total : 35473.19 138.57 0.00 0.00 3605.37 2359.30 9489.61 00:08:04.890 { 00:08:04.890 "results": [ 00:08:04.890 { 00:08:04.890 "job": "Nvme0n1", 00:08:04.890 "core_mask": "0x2", 00:08:04.890 "workload": "randwrite", 00:08:04.890 "status": "finished", 00:08:04.890 "queue_depth": 128, 00:08:04.890 "io_size": 4096, 
00:08:04.890 "runtime": 10.003048, 00:08:04.890 "iops": 35473.18777236698, 00:08:04.890 "mibps": 138.56713973580852, 00:08:04.890 "io_failed": 0, 00:08:04.890 "io_timeout": 0, 00:08:04.890 "avg_latency_us": 3605.3698076338633, 00:08:04.890 "min_latency_us": 2359.296, 00:08:04.890 "max_latency_us": 9489.6128 00:08:04.890 } 00:08:04.890 ], 00:08:04.890 "core_count": 1 00:08:04.890 } 00:08:05.148 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4022274 00:08:05.148 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 4022274 ']' 00:08:05.148 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 4022274 00:08:05.148 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:05.148 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.148 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4022274 00:08:05.148 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:05.148 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:05.148 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4022274' 00:08:05.148 killing process with pid 4022274 00:08:05.148 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 4022274 00:08:05.148 Received shutdown signal, test time was about 10.000000 seconds 00:08:05.148 00:08:05.148 Latency(us) 00:08:05.148 [2024-11-27T11:46:31.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.148 [2024-11-27T11:46:31.533Z] =================================================================================================================== 00:08:05.148 [2024-11-27T11:46:31.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:05.148 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 4022274 00:08:05.148 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:05.407 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:05.665 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:05.665 12:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e87e4fa3-3025-49b8-9716-ab68a6160c27 00:08:05.924 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:05.924 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:05.924 12:46:32 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:05.924 [2024-11-27 12:46:32.302350] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e87e4fa3-3025-49b8-9716-ab68a6160c27 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e87e4fa3-3025-49b8-9716-ab68a6160c27 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e87e4fa3-3025-49b8-9716-ab68a6160c27 00:08:06.183 request: 00:08:06.183 { 00:08:06.183 "uuid": "e87e4fa3-3025-49b8-9716-ab68a6160c27", 00:08:06.183 "method": "bdev_lvol_get_lvstores", 00:08:06.183 "req_id": 1 00:08:06.183 } 00:08:06.183 Got JSON-RPC error response 00:08:06.183 response: 00:08:06.183 { 00:08:06.183 "code": -19, 00:08:06.183 "message": "No such device" 00:08:06.183 } 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.183 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.441 aio_bdev 00:08:06.441 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bf20b497-0827-475e-b75b-276e7d57807c 00:08:06.441 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=bf20b497-0827-475e-b75b-276e7d57807c 00:08:06.441 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:06.441 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:06.441 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:06.441 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:06.441 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:06.699 12:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bf20b497-0827-475e-b75b-276e7d57807c -t 2000 00:08:06.699 [ 00:08:06.699 { 00:08:06.699 "name": "bf20b497-0827-475e-b75b-276e7d57807c", 00:08:06.699 "aliases": [ 00:08:06.699 "lvs/lvol" 00:08:06.699 ], 00:08:06.699 "product_name": "Logical Volume", 00:08:06.699 "block_size": 4096, 00:08:06.699 "num_blocks": 38912, 00:08:06.699 "uuid": "bf20b497-0827-475e-b75b-276e7d57807c", 00:08:06.699 "assigned_rate_limits": { 00:08:06.699 "rw_ios_per_sec": 0, 00:08:06.699 "rw_mbytes_per_sec": 0, 00:08:06.699 "r_mbytes_per_sec": 0, 00:08:06.699 "w_mbytes_per_sec": 0 00:08:06.699 }, 00:08:06.699 "claimed": false, 00:08:06.699 "zoned": false, 00:08:06.699 "supported_io_types": { 00:08:06.699 "read": true, 00:08:06.699 "write": true, 00:08:06.699 "unmap": true, 00:08:06.699 "flush": false, 00:08:06.699 "reset": true, 00:08:06.699 "nvme_admin": false, 00:08:06.699 "nvme_io": false, 00:08:06.699 "nvme_io_md": false, 00:08:06.699 "write_zeroes": true, 00:08:06.699 "zcopy": false, 00:08:06.699 "get_zone_info": false, 00:08:06.699 "zone_management": false, 00:08:06.699 "zone_append": false, 00:08:06.699 "compare": false, 00:08:06.699 "compare_and_write": false, 00:08:06.699 "abort": false, 00:08:06.699 "seek_hole": true, 00:08:06.699 "seek_data": true, 00:08:06.699 "copy": false, 00:08:06.699 "nvme_iov_md": false 00:08:06.699 }, 00:08:06.699 "driver_specific": { 00:08:06.699 "lvol": { 00:08:06.699 "lvol_store_uuid": "e87e4fa3-3025-49b8-9716-ab68a6160c27", 00:08:06.699 "base_bdev": "aio_bdev", 00:08:06.699 "thin_provision": false, 00:08:06.699 "num_allocated_clusters": 38, 00:08:06.699 "snapshot": false, 00:08:06.699 "clone": false, 00:08:06.699 "esnap_clone": false 00:08:06.699 } 00:08:06.699 } 00:08:06.699 } 00:08:06.699 ] 00:08:06.699 12:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:06.699 12:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e87e4fa3-3025-49b8-9716-ab68a6160c27 00:08:06.699 12:46:33 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:06.958 12:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:06.958 12:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e87e4fa3-3025-49b8-9716-ab68a6160c27 00:08:06.958 12:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:07.218 12:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:07.218 12:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bf20b497-0827-475e-b75b-276e7d57807c 00:08:07.476 12:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e87e4fa3-3025-49b8-9716-ab68a6160c27 00:08:07.476 12:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.735 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.735 00:08:07.735 real 0m15.531s 00:08:07.735 user 0m15.330s 00:08:07.735 sys 0m1.142s 00:08:07.735 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.735 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:07.735 ************************************ 00:08:07.735 END TEST lvs_grow_clean 00:08:07.735 ************************************ 00:08:07.735 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:07.735 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.735 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.735 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.994 ************************************ 00:08:07.994 START TEST lvs_grow_dirty 00:08:07.994 ************************************ 00:08:07.994 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:07.994 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:07.994 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:07.994 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:07.994 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:07.994 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:08:07.994 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:07.994 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.994 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.994 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.994 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:07.994 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:08.253 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:08.253 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:08.253 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:08.512 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:08.512 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:08.512 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 lvol 150 00:08:08.512 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=aae6fb86-4239-4f17-a67b-b91f0fdc1d04 00:08:08.512 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.771 12:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:08.771 [2024-11-27 12:46:35.072322] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:08.771 [2024-11-27 12:46:35.072378] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:08.771 true 00:08:08.771 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:08.771 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:09.030 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:09.030 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.288 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aae6fb86-4239-4f17-a67b-b91f0fdc1d04 00:08:09.288 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:09.548 [2024-11-27 12:46:35.774690] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:09.548 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:09.809 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4025006 00:08:09.809 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:09.809 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.809 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4025006 /var/tmp/bdevperf.sock 00:08:09.809 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4025006 ']' 00:08:09.809 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.809 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.809 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.809 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.809 12:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.809 [2024-11-27 12:46:36.005414] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
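As in the clean variant, bdevperf runs here as its own SPDK application on a separate RPC socket, started suspended with -z; the controller attach and the timed run that follow in the trace look roughly like this (socket path, core mask, and I/O shape copied from the trace, SPDK_ROOT as a placeholder):

"$SPDK_ROOT/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
bdevperf_pid=$!

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# -z keeps bdevperf idle until perform_tests triggers the 10 s random-write run.
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests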
00:08:09.809 [2024-11-27 12:46:36.005463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4025006 ] 00:08:09.809 [2024-11-27 12:46:36.091797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.809 [2024-11-27 12:46:36.130200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.068 12:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.068 12:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:10.068 12:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:10.326 Nvme0n1 00:08:10.326 12:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:10.326 [ 00:08:10.326 { 00:08:10.326 "name": "Nvme0n1", 00:08:10.326 "aliases": [ 00:08:10.326 "aae6fb86-4239-4f17-a67b-b91f0fdc1d04" 00:08:10.326 ], 00:08:10.326 "product_name": "NVMe disk", 00:08:10.326 "block_size": 4096, 00:08:10.326 "num_blocks": 38912, 00:08:10.326 "uuid": "aae6fb86-4239-4f17-a67b-b91f0fdc1d04", 00:08:10.326 "numa_id": 1, 00:08:10.326 "assigned_rate_limits": { 00:08:10.326 "rw_ios_per_sec": 0, 00:08:10.326 "rw_mbytes_per_sec": 0, 00:08:10.326 "r_mbytes_per_sec": 0, 00:08:10.326 "w_mbytes_per_sec": 0 00:08:10.326 }, 00:08:10.326 "claimed": false, 00:08:10.326 "zoned": false, 00:08:10.326 "supported_io_types": { 00:08:10.326 "read": true, 00:08:10.326 "write": true, 00:08:10.326 "unmap": true, 00:08:10.326 "flush": true, 00:08:10.326 "reset": true, 00:08:10.326 "nvme_admin": true, 00:08:10.326 "nvme_io": true, 00:08:10.326 "nvme_io_md": false, 00:08:10.326 "write_zeroes": true, 00:08:10.326 "zcopy": false, 00:08:10.326 "get_zone_info": false, 00:08:10.326 "zone_management": false, 00:08:10.326 "zone_append": false, 00:08:10.326 "compare": true, 00:08:10.326 "compare_and_write": true, 00:08:10.326 "abort": true, 00:08:10.326 "seek_hole": false, 00:08:10.326 "seek_data": false, 00:08:10.326 "copy": true, 00:08:10.326 "nvme_iov_md": false 00:08:10.326 }, 00:08:10.326 "memory_domains": [ 00:08:10.326 { 00:08:10.326 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:10.326 "dma_device_type": 0 00:08:10.326 } 00:08:10.326 ], 00:08:10.326 "driver_specific": { 00:08:10.326 "nvme": [ 00:08:10.326 { 00:08:10.326 "trid": { 00:08:10.326 "trtype": "RDMA", 00:08:10.326 "adrfam": "IPv4", 00:08:10.326 "traddr": "192.168.100.8", 00:08:10.326 "trsvcid": "4420", 00:08:10.326 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:10.326 }, 00:08:10.326 "ctrlr_data": { 00:08:10.326 "cntlid": 1, 00:08:10.326 "vendor_id": "0x8086", 00:08:10.326 "model_number": "SPDK bdev Controller", 00:08:10.326 "serial_number": "SPDK0", 00:08:10.326 "firmware_revision": "25.01", 00:08:10.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.326 "oacs": { 00:08:10.326 "security": 0, 00:08:10.326 "format": 0, 00:08:10.326 "firmware": 0, 00:08:10.326 "ns_manage": 0 00:08:10.326 }, 00:08:10.326 "multi_ctrlr": true, 
00:08:10.326 "ana_reporting": false 00:08:10.326 }, 00:08:10.326 "vs": { 00:08:10.326 "nvme_version": "1.3" 00:08:10.326 }, 00:08:10.326 "ns_data": { 00:08:10.326 "id": 1, 00:08:10.326 "can_share": true 00:08:10.326 } 00:08:10.326 } 00:08:10.326 ], 00:08:10.326 "mp_policy": "active_passive" 00:08:10.326 } 00:08:10.326 } 00:08:10.326 ] 00:08:10.326 12:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4025272 00:08:10.326 12:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:10.326 12:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:10.584 Running I/O for 10 seconds... 00:08:11.518 Latency(us) 00:08:11.518 [2024-11-27T11:46:37.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.518 Nvme0n1 : 1.00 34591.00 135.12 0.00 0.00 0.00 0.00 0.00 00:08:11.518 [2024-11-27T11:46:37.903Z] =================================================================================================================== 00:08:11.518 [2024-11-27T11:46:37.903Z] Total : 34591.00 135.12 0.00 0.00 0.00 0.00 0.00 00:08:11.518 00:08:12.452 12:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:12.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.452 Nvme0n1 : 2.00 34976.50 136.63 0.00 0.00 0.00 0.00 0.00 00:08:12.452 [2024-11-27T11:46:38.837Z] =================================================================================================================== 00:08:12.452 [2024-11-27T11:46:38.837Z] Total : 34976.50 136.63 0.00 0.00 0.00 0.00 0.00 00:08:12.452 00:08:12.709 true 00:08:12.709 12:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:12.709 12:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:12.709 12:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:12.709 12:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:12.709 12:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4025272 00:08:13.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.643 Nvme0n1 : 3.00 35060.33 136.95 0.00 0.00 0.00 0.00 0.00 00:08:13.643 [2024-11-27T11:46:40.028Z] =================================================================================================================== 00:08:13.643 [2024-11-27T11:46:40.028Z] Total : 35060.33 136.95 0.00 0.00 0.00 0.00 0.00 00:08:13.643 00:08:14.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.575 Nvme0n1 : 4.00 35022.75 136.81 0.00 0.00 0.00 0.00 0.00 00:08:14.575 [2024-11-27T11:46:40.960Z] 
=================================================================================================================== 00:08:14.575 [2024-11-27T11:46:40.960Z] Total : 35022.75 136.81 0.00 0.00 0.00 0.00 0.00 00:08:14.575 00:08:15.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.508 Nvme0n1 : 5.00 35147.40 137.29 0.00 0.00 0.00 0.00 0.00 00:08:15.508 [2024-11-27T11:46:41.893Z] =================================================================================================================== 00:08:15.508 [2024-11-27T11:46:41.893Z] Total : 35147.40 137.29 0.00 0.00 0.00 0.00 0.00 00:08:15.508 00:08:16.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.440 Nvme0n1 : 6.00 35232.83 137.63 0.00 0.00 0.00 0.00 0.00 00:08:16.440 [2024-11-27T11:46:42.825Z] =================================================================================================================== 00:08:16.440 [2024-11-27T11:46:42.826Z] Total : 35232.83 137.63 0.00 0.00 0.00 0.00 0.00 00:08:16.441 00:08:17.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.814 Nvme0n1 : 7.00 35295.14 137.87 0.00 0.00 0.00 0.00 0.00 00:08:17.814 [2024-11-27T11:46:44.199Z] =================================================================================================================== 00:08:17.814 [2024-11-27T11:46:44.199Z] Total : 35295.14 137.87 0.00 0.00 0.00 0.00 0.00 00:08:17.814 00:08:18.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.748 Nvme0n1 : 8.00 35332.62 138.02 0.00 0.00 0.00 0.00 0.00 00:08:18.748 [2024-11-27T11:46:45.133Z] =================================================================================================================== 00:08:18.748 [2024-11-27T11:46:45.133Z] Total : 35332.62 138.02 0.00 0.00 0.00 0.00 0.00 00:08:18.748 00:08:19.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.682 Nvme0n1 : 9.00 35369.89 138.16 0.00 0.00 0.00 0.00 0.00 00:08:19.682 [2024-11-27T11:46:46.067Z] =================================================================================================================== 00:08:19.682 [2024-11-27T11:46:46.067Z] Total : 35369.89 138.16 0.00 0.00 0.00 0.00 0.00 00:08:19.682 00:08:20.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.617 Nvme0n1 : 10.00 35414.70 138.34 0.00 0.00 0.00 0.00 0.00 00:08:20.617 [2024-11-27T11:46:47.002Z] =================================================================================================================== 00:08:20.617 [2024-11-27T11:46:47.002Z] Total : 35414.70 138.34 0.00 0.00 0.00 0.00 0.00 00:08:20.617 00:08:20.617 00:08:20.617 Latency(us) 00:08:20.617 [2024-11-27T11:46:47.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.617 Nvme0n1 : 10.00 35413.96 138.34 0.00 0.00 3611.47 2319.97 15623.78 00:08:20.617 [2024-11-27T11:46:47.002Z] =================================================================================================================== 00:08:20.617 [2024-11-27T11:46:47.002Z] Total : 35413.96 138.34 0.00 0.00 3611.47 2319.97 15623.78 00:08:20.617 { 00:08:20.617 "results": [ 00:08:20.617 { 00:08:20.617 "job": "Nvme0n1", 00:08:20.617 "core_mask": "0x2", 00:08:20.617 "workload": "randwrite", 00:08:20.617 "status": "finished", 00:08:20.617 "queue_depth": 128, 00:08:20.617 "io_size": 4096, 
00:08:20.617 "runtime": 10.002948, 00:08:20.617 "iops": 35413.959964602436, 00:08:20.617 "mibps": 138.33578111172827, 00:08:20.617 "io_failed": 0, 00:08:20.617 "io_timeout": 0, 00:08:20.617 "avg_latency_us": 3611.4711307415228, 00:08:20.617 "min_latency_us": 2319.9744, 00:08:20.617 "max_latency_us": 15623.7824 00:08:20.617 } 00:08:20.617 ], 00:08:20.617 "core_count": 1 00:08:20.617 } 00:08:20.617 12:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4025006 00:08:20.617 12:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 4025006 ']' 00:08:20.617 12:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 4025006 00:08:20.617 12:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:20.617 12:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.617 12:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4025006 00:08:20.617 12:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:20.617 12:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:20.617 12:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4025006' 00:08:20.617 killing process with pid 4025006 00:08:20.617 12:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 4025006 00:08:20.617 Received shutdown signal, test time was about 10.000000 seconds 00:08:20.617 00:08:20.617 Latency(us) 00:08:20.617 [2024-11-27T11:46:47.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.617 [2024-11-27T11:46:47.002Z] =================================================================================================================== 00:08:20.617 [2024-11-27T11:46:47.002Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:20.617 12:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 4025006 00:08:20.877 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:20.877 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:21.136 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:21.136 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:21.395 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:21.395 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:21.395 12:46:47 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4021697 00:08:21.395 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4021697 00:08:21.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4021697 Killed "${NVMF_APP[@]}" "$@" 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4027147 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4027147 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4027147 ']' 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.396 12:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:21.396 [2024-11-27 12:46:47.713763] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:08:21.396 [2024-11-27 12:46:47.713815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.655 [2024-11-27 12:46:47.805423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.655 [2024-11-27 12:46:47.844709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.655 [2024-11-27 12:46:47.844743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.655 [2024-11-27 12:46:47.844752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.655 [2024-11-27 12:46:47.844760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:21.655 [2024-11-27 12:46:47.844767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.655 [2024-11-27 12:46:47.845381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.224 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.224 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:22.224 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:22.224 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:22.224 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.224 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.224 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.484 [2024-11-27 12:46:48.741670] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:22.484 [2024-11-27 12:46:48.741766] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:22.484 [2024-11-27 12:46:48.741795] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:22.484 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:22.484 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev aae6fb86-4239-4f17-a67b-b91f0fdc1d04 00:08:22.484 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=aae6fb86-4239-4f17-a67b-b91f0fdc1d04 00:08:22.484 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.484 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:22.484 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.484 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.484 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:22.743 12:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aae6fb86-4239-4f17-a67b-b91f0fdc1d04 -t 2000 00:08:22.743 [ 00:08:22.743 { 00:08:22.743 "name": "aae6fb86-4239-4f17-a67b-b91f0fdc1d04", 00:08:22.743 "aliases": [ 00:08:22.743 "lvs/lvol" 00:08:22.743 ], 00:08:22.743 "product_name": "Logical Volume", 00:08:22.743 "block_size": 4096, 00:08:22.743 "num_blocks": 38912, 00:08:22.743 "uuid": "aae6fb86-4239-4f17-a67b-b91f0fdc1d04", 00:08:22.743 "assigned_rate_limits": { 00:08:22.743 "rw_ios_per_sec": 0, 00:08:22.743 "rw_mbytes_per_sec": 0, 
00:08:22.743 "r_mbytes_per_sec": 0, 00:08:22.743 "w_mbytes_per_sec": 0 00:08:22.743 }, 00:08:22.743 "claimed": false, 00:08:22.743 "zoned": false, 00:08:22.743 "supported_io_types": { 00:08:22.743 "read": true, 00:08:22.743 "write": true, 00:08:22.743 "unmap": true, 00:08:22.743 "flush": false, 00:08:22.743 "reset": true, 00:08:22.743 "nvme_admin": false, 00:08:22.743 "nvme_io": false, 00:08:22.743 "nvme_io_md": false, 00:08:22.743 "write_zeroes": true, 00:08:22.743 "zcopy": false, 00:08:22.743 "get_zone_info": false, 00:08:22.743 "zone_management": false, 00:08:22.743 "zone_append": false, 00:08:22.743 "compare": false, 00:08:22.743 "compare_and_write": false, 00:08:22.743 "abort": false, 00:08:22.743 "seek_hole": true, 00:08:22.743 "seek_data": true, 00:08:22.743 "copy": false, 00:08:22.743 "nvme_iov_md": false 00:08:22.743 }, 00:08:22.743 "driver_specific": { 00:08:22.743 "lvol": { 00:08:22.743 "lvol_store_uuid": "bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0", 00:08:22.743 "base_bdev": "aio_bdev", 00:08:22.743 "thin_provision": false, 00:08:22.743 "num_allocated_clusters": 38, 00:08:22.743 "snapshot": false, 00:08:22.743 "clone": false, 00:08:22.743 "esnap_clone": false 00:08:22.743 } 00:08:22.743 } 00:08:22.743 } 00:08:22.743 ] 00:08:22.743 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:22.743 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:22.743 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:23.013 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:23.013 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:23.014 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:23.281 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:23.281 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.281 [2024-11-27 12:46:49.646250] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:23.540 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:23.540 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:23.540 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:23.541 request: 00:08:23.541 { 00:08:23.541 "uuid": "bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0", 00:08:23.541 "method": "bdev_lvol_get_lvstores", 00:08:23.541 "req_id": 1 00:08:23.541 } 00:08:23.541 Got JSON-RPC error response 00:08:23.541 response: 00:08:23.541 { 00:08:23.541 "code": -19, 00:08:23.541 "message": "No such device" 00:08:23.541 } 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:23.541 12:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.799 aio_bdev 00:08:23.799 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aae6fb86-4239-4f17-a67b-b91f0fdc1d04 00:08:23.799 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=aae6fb86-4239-4f17-a67b-b91f0fdc1d04 00:08:23.799 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.799 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:23.799 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.799 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.799 12:46:50 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.058 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aae6fb86-4239-4f17-a67b-b91f0fdc1d04 -t 2000 00:08:24.058 [ 00:08:24.058 { 00:08:24.058 "name": "aae6fb86-4239-4f17-a67b-b91f0fdc1d04", 00:08:24.058 "aliases": [ 00:08:24.058 "lvs/lvol" 00:08:24.058 ], 00:08:24.058 "product_name": "Logical Volume", 00:08:24.058 "block_size": 4096, 00:08:24.058 "num_blocks": 38912, 00:08:24.058 "uuid": "aae6fb86-4239-4f17-a67b-b91f0fdc1d04", 00:08:24.058 "assigned_rate_limits": { 00:08:24.058 "rw_ios_per_sec": 0, 00:08:24.058 "rw_mbytes_per_sec": 0, 00:08:24.058 "r_mbytes_per_sec": 0, 00:08:24.058 "w_mbytes_per_sec": 0 00:08:24.058 }, 00:08:24.058 "claimed": false, 00:08:24.058 "zoned": false, 00:08:24.058 "supported_io_types": { 00:08:24.059 "read": true, 00:08:24.059 "write": true, 00:08:24.059 "unmap": true, 00:08:24.059 "flush": false, 00:08:24.059 "reset": true, 00:08:24.059 "nvme_admin": false, 00:08:24.059 "nvme_io": false, 00:08:24.059 "nvme_io_md": false, 00:08:24.059 "write_zeroes": true, 00:08:24.059 "zcopy": false, 00:08:24.059 "get_zone_info": false, 00:08:24.059 "zone_management": false, 00:08:24.059 "zone_append": false, 00:08:24.059 "compare": false, 00:08:24.059 "compare_and_write": false, 00:08:24.059 "abort": false, 00:08:24.059 "seek_hole": true, 00:08:24.059 "seek_data": true, 00:08:24.059 "copy": false, 00:08:24.059 "nvme_iov_md": false 00:08:24.059 }, 00:08:24.059 "driver_specific": { 00:08:24.059 "lvol": { 00:08:24.059 "lvol_store_uuid": "bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0", 00:08:24.059 "base_bdev": "aio_bdev", 00:08:24.059 "thin_provision": false, 00:08:24.059 "num_allocated_clusters": 38, 00:08:24.059 "snapshot": false, 00:08:24.059 "clone": false, 00:08:24.059 "esnap_clone": false 00:08:24.059 } 00:08:24.059 } 00:08:24.059 } 00:08:24.059 ] 00:08:24.059 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:24.059 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:24.059 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:24.318 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:24.318 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:24.318 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:24.578 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:24.578 12:46:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aae6fb86-4239-4f17-a67b-b91f0fdc1d04 00:08:24.578 12:46:50 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 00:08:24.837 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.096 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.096 00:08:25.096 real 0m17.221s 00:08:25.096 user 0m44.604s 00:08:25.096 sys 0m3.170s 00:08:25.096 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.096 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:25.096 ************************************ 00:08:25.097 END TEST lvs_grow_dirty 00:08:25.097 ************************************ 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:25.097 nvmf_trace.0 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:25.097 rmmod nvme_rdma 00:08:25.097 rmmod nvme_fabrics 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:25.097 
12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4027147 ']' 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4027147 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 4027147 ']' 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 4027147 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.097 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4027147 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4027147' 00:08:25.376 killing process with pid 4027147 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 4027147 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 4027147 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:25.376 00:08:25.376 real 0m43.071s 00:08:25.376 user 1m6.759s 00:08:25.376 sys 0m11.320s 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.376 ************************************ 00:08:25.376 END TEST nvmf_lvs_grow 00:08:25.376 ************************************ 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.376 12:46:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.637 ************************************ 00:08:25.637 START TEST nvmf_bdev_io_wait 00:08:25.637 ************************************ 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:25.637 * Looking for test storage... 
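For context, the lvs_grow_dirty teardown above kill -9's the target while the lvstore is dirty and then checks that blobstore recovery restores the grown geometry. A minimal sketch of that verification run by hand, assuming a running target on the default RPC socket and using only RPCs that appear in this log (the lvstore UUID and the 61-free-cluster expectation are specific to this run):

# Re-attach the AIO backing file; the target replays the dirty blobstore
# ("Performing recovery on blobstore" / "Recover: blob 0x1" notices above)
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
scripts/rpc.py bdev_wait_for_examine

# The recovered lvstore should report the post-grow free-cluster count
scripts/rpc.py bdev_lvol_get_lvstores -u bb3ea1ef-6f8a-4b84-a354-edfc8ec66ce0 \
  | jq -r '.[0].free_clusters'   # 61 in this run

Deleting the aio_bdev out from under the lvstore (bdev_aio_delete above) is what makes the subsequent bdev_lvol_get_lvstores return the expected -19 "No such device" before the bdev is re-created.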
00:08:25.637 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:25.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.637 --rc genhtml_branch_coverage=1 00:08:25.637 --rc genhtml_function_coverage=1 00:08:25.637 --rc genhtml_legend=1 00:08:25.637 --rc geninfo_all_blocks=1 00:08:25.637 --rc geninfo_unexecuted_blocks=1 00:08:25.637 00:08:25.637 ' 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:25.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.637 --rc genhtml_branch_coverage=1 00:08:25.637 --rc genhtml_function_coverage=1 00:08:25.637 --rc genhtml_legend=1 00:08:25.637 --rc geninfo_all_blocks=1 00:08:25.637 --rc geninfo_unexecuted_blocks=1 00:08:25.637 00:08:25.637 ' 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:25.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.637 --rc genhtml_branch_coverage=1 00:08:25.637 --rc genhtml_function_coverage=1 00:08:25.637 --rc genhtml_legend=1 00:08:25.637 --rc geninfo_all_blocks=1 00:08:25.637 --rc geninfo_unexecuted_blocks=1 00:08:25.637 00:08:25.637 ' 00:08:25.637 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:25.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.637 --rc genhtml_branch_coverage=1 00:08:25.638 --rc genhtml_function_coverage=1 00:08:25.638 --rc genhtml_legend=1 00:08:25.638 --rc geninfo_all_blocks=1 00:08:25.638 --rc geninfo_unexecuted_blocks=1 00:08:25.638 00:08:25.638 ' 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.638 12:46:51 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:25.638 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.638 12:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.638 12:46:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:25.638 12:46:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:25.638 12:46:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:25.638 12:46:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.626 12:47:00 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:35.626 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:35.626 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:35.626 12:47:00 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:35.626 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:35.626 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:35.626 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:35.627 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:35.627 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:35.627 altname enp217s0f0np0 00:08:35.627 altname ens818f0np0 00:08:35.627 inet 192.168.100.8/24 scope global mlx_0_0 00:08:35.627 valid_lft forever preferred_lft forever 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:35.627 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:35.627 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:35.627 altname enp217s0f1np1 00:08:35.627 altname ens818f1np1 00:08:35.627 inet 192.168.100.9/24 scope global mlx_0_1 00:08:35.627 valid_lft forever preferred_lft forever 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile 
-t rxe_net_devs 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:35.627 192.168.100.9' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:35.627 192.168.100.9' 00:08:35.627 
12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:35.627 192.168.100.9' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4031925 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 4031925 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 4031925 ']' 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.627 12:47:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.627 [2024-11-27 12:47:00.477372] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
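For reference, the address-discovery pipeline traced above (nvmf/common.sh@116-117) reduces to roughly this helper; the interface names and the two addresses it yields are exactly the ones the trace just derived:

    # Sketch of the traced get_ip_address helper: print the first IPv4
    # address on an interface, with the /prefix-length suffix stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 (NVMF_FIRST_TARGET_IP)
    get_ip_address mlx_0_1   # -> 192.168.100.9 (NVMF_SECOND_TARGET_IP)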
00:08:35.627 [2024-11-27 12:47:00.477427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.628 [2024-11-27 12:47:00.566909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.628 [2024-11-27 12:47:00.610565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.628 [2024-11-27 12:47:00.610599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.628 [2024-11-27 12:47:00.610613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.628 [2024-11-27 12:47:00.610622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.628 [2024-11-27 12:47:00.610629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.628 [2024-11-27 12:47:00.612420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.628 [2024-11-27 12:47:00.612514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.628 [2024-11-27 12:47:00.612599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.628 [2024-11-27 12:47:00.612601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.628 12:47:01 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.628 [2024-11-27 12:47:01.468219] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdfbe60/0xe00350) succeed. 00:08:35.628 [2024-11-27 12:47:01.477666] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdfd4f0/0xe419f0) succeed. 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.628 Malloc0 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.628 [2024-11-27 12:47:01.667174] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4032212 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4032214 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 
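Outside the harness, the target-side sequence just traced (bdev options, framework init, RDMA transport, malloc bdev, subsystem, namespace, listener) can be reproduced with scripts/rpc.py against the same /var/tmp/spdk.sock; a sketch with the argument values taken verbatim from the trace:

    # Manual equivalent of the traced rpc_cmd calls (sketch; rpc.py talks to
    # /var/tmp/spdk.sock by default, matching the socket waited on above).
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_set_options -p 5 -c 1
    $RPC framework_start_init
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420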
00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.628 { 00:08:35.628 "params": { 00:08:35.628 "name": "Nvme$subsystem", 00:08:35.628 "trtype": "$TEST_TRANSPORT", 00:08:35.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.628 "adrfam": "ipv4", 00:08:35.628 "trsvcid": "$NVMF_PORT", 00:08:35.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.628 "hdgst": ${hdgst:-false}, 00:08:35.628 "ddgst": ${ddgst:-false} 00:08:35.628 }, 00:08:35.628 "method": "bdev_nvme_attach_controller" 00:08:35.628 } 00:08:35.628 EOF 00:08:35.628 )") 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4032216 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.628 { 00:08:35.628 "params": { 00:08:35.628 "name": "Nvme$subsystem", 00:08:35.628 "trtype": "$TEST_TRANSPORT", 00:08:35.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.628 "adrfam": "ipv4", 00:08:35.628 "trsvcid": "$NVMF_PORT", 00:08:35.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.628 "hdgst": ${hdgst:-false}, 00:08:35.628 "ddgst": ${ddgst:-false} 00:08:35.628 }, 00:08:35.628 "method": "bdev_nvme_attach_controller" 00:08:35.628 } 00:08:35.628 EOF 00:08:35.628 )") 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4032219 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.628 { 00:08:35.628 "params": { 00:08:35.628 "name": "Nvme$subsystem", 00:08:35.628 "trtype": "$TEST_TRANSPORT", 
00:08:35.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.628 "adrfam": "ipv4", 00:08:35.628 "trsvcid": "$NVMF_PORT", 00:08:35.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.628 "hdgst": ${hdgst:-false}, 00:08:35.628 "ddgst": ${ddgst:-false} 00:08:35.628 }, 00:08:35.628 "method": "bdev_nvme_attach_controller" 00:08:35.628 } 00:08:35.628 EOF 00:08:35.628 )") 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.628 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.628 { 00:08:35.628 "params": { 00:08:35.628 "name": "Nvme$subsystem", 00:08:35.628 "trtype": "$TEST_TRANSPORT", 00:08:35.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.628 "adrfam": "ipv4", 00:08:35.628 "trsvcid": "$NVMF_PORT", 00:08:35.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.628 "hdgst": ${hdgst:-false}, 00:08:35.628 "ddgst": ${ddgst:-false} 00:08:35.628 }, 00:08:35.628 "method": "bdev_nvme_attach_controller" 00:08:35.628 } 00:08:35.628 EOF 00:08:35.628 )") 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4032212 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.629 "params": { 00:08:35.629 "name": "Nvme1", 00:08:35.629 "trtype": "rdma", 00:08:35.629 "traddr": "192.168.100.8", 00:08:35.629 "adrfam": "ipv4", 00:08:35.629 "trsvcid": "4420", 00:08:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.629 "hdgst": false, 00:08:35.629 "ddgst": false 00:08:35.629 }, 00:08:35.629 "method": "bdev_nvme_attach_controller" 00:08:35.629 }' 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.629 "params": { 00:08:35.629 "name": "Nvme1", 00:08:35.629 "trtype": "rdma", 00:08:35.629 "traddr": "192.168.100.8", 00:08:35.629 "adrfam": "ipv4", 00:08:35.629 "trsvcid": "4420", 00:08:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.629 "hdgst": false, 00:08:35.629 "ddgst": false 00:08:35.629 }, 00:08:35.629 "method": "bdev_nvme_attach_controller" 00:08:35.629 }' 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.629 "params": { 00:08:35.629 "name": "Nvme1", 00:08:35.629 "trtype": "rdma", 00:08:35.629 "traddr": "192.168.100.8", 00:08:35.629 "adrfam": "ipv4", 00:08:35.629 "trsvcid": "4420", 00:08:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.629 "hdgst": false, 00:08:35.629 "ddgst": false 00:08:35.629 }, 00:08:35.629 "method": "bdev_nvme_attach_controller" 00:08:35.629 }' 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:35.629 12:47:01 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.629 "params": { 00:08:35.629 "name": "Nvme1", 00:08:35.629 "trtype": "rdma", 00:08:35.629 "traddr": "192.168.100.8", 00:08:35.629 "adrfam": "ipv4", 00:08:35.629 "trsvcid": "4420", 00:08:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.629 "hdgst": false, 00:08:35.629 "ddgst": false 00:08:35.629 }, 00:08:35.629 "method": "bdev_nvme_attach_controller" 00:08:35.629 }' 00:08:35.629 [2024-11-27 12:47:01.717437] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:08:35.629 [2024-11-27 12:47:01.717493] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:35.629 [2024-11-27 12:47:01.723544] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:08:35.629 [2024-11-27 12:47:01.723545] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
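Condensed, the launch pattern driving the EAL banners below is four concurrent bdevperf processes, one per workload, each on its own core mask and shm ID (the -i value is what produces the spdkN file-prefixes in the banners). A sketch, with the flags copied from the trace; /dev/fd/63 stands in for the process-substituted JSON config each job reads:

    # Four concurrent one-second jobs against the same Nvme1 controller.
    BPERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
    $BPERF -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $BPERF -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $BPERF -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $BPERF -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID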
00:08:35.629 [2024-11-27 12:47:01.723587] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:35.629 [2024-11-27 12:47:01.723587] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:35.629 [2024-11-27 12:47:01.723925] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:08:35.629 [2024-11-27 12:47:01.723974] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:35.629 [2024-11-27 12:47:01.928098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.629 [2024-11-27 12:47:01.968699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:35.887 [2024-11-27 12:47:02.020095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.887 [2024-11-27 12:47:02.061296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:35.887 [2024-11-27 12:47:02.142926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.887 [2024-11-27 12:47:02.193676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.887 [2024-11-27 12:47:02.197121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:35.887 [2024-11-27 12:47:02.234769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:36.144 Running I/O for 1 seconds... 00:08:36.144 Running I/O for 1 seconds... 00:08:36.144 Running I/O for 1 seconds... 00:08:36.144 Running I/O for 1 seconds...
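As a quick cross-check on the result tables that follow, the MiB/s column is simply IOPS times the 4096-byte IO size:

    # e.g. the write job: 18042.32 IOPS * 4096 B ~= 70.48 MiB/s, matching the table.
    awk 'BEGIN { printf "%.2f MiB/s\n", 18042.32 * 4096 / (1024 * 1024) }'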
00:08:37.078 18010.00 IOPS, 70.35 MiB/s 00:08:37.078 Latency(us) 00:08:37.078 [2024-11-27T11:47:03.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.078 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:37.078 Nvme1n1 : 1.01 18042.32 70.48 0.00 0.00 7071.76 4168.09 13316.92 00:08:37.078 [2024-11-27T11:47:03.463Z] =================================================================================================================== 00:08:37.078 [2024-11-27T11:47:03.463Z] Total : 18042.32 70.48 0.00 0.00 7071.76 4168.09 13316.92 00:08:37.078 256944.00 IOPS, 1003.69 MiB/s 00:08:37.078 Latency(us) 00:08:37.078 [2024-11-27T11:47:03.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.078 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:37.078 Nvme1n1 : 1.00 256549.47 1002.15 0.00 0.00 496.29 208.90 2044.72 00:08:37.078 [2024-11-27T11:47:03.463Z] =================================================================================================================== 00:08:37.078 [2024-11-27T11:47:03.463Z] Total : 256549.47 1002.15 0.00 0.00 496.29 208.90 2044.72 00:08:37.079 14405.00 IOPS, 56.27 MiB/s 00:08:37.079 Latency(us) 00:08:37.079 [2024-11-27T11:47:03.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.079 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:37.079 Nvme1n1 : 1.01 14460.06 56.48 0.00 0.00 8823.84 4718.59 16567.50 00:08:37.079 [2024-11-27T11:47:03.464Z] =================================================================================================================== 00:08:37.079 [2024-11-27T11:47:03.464Z] Total : 14460.06 56.48 0.00 0.00 8823.84 4718.59 16567.50 00:08:37.079 17642.00 IOPS, 68.91 MiB/s 00:08:37.079 Latency(us) 00:08:37.079 [2024-11-27T11:47:03.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.079 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:37.079 Nvme1n1 : 1.01 17738.98 69.29 0.00 0.00 7200.68 2726.30 16882.07 00:08:37.079 [2024-11-27T11:47:03.464Z] =================================================================================================================== 00:08:37.079 [2024-11-27T11:47:03.464Z] Total : 17738.98 69.29 0.00 0.00 7200.68 2726.30 16882.07 00:08:37.079 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4032214 00:08:37.337 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4032216 00:08:37.337 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4032219 00:08:37.337 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.337 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.337 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:37.338 rmmod nvme_rdma 00:08:37.338 rmmod nvme_fabrics 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4031925 ']' 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4031925 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 4031925 ']' 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 4031925 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4031925 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4031925' 00:08:37.338 killing process with pid 4031925 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 4031925 00:08:37.338 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 4031925 00:08:37.597 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:37.597 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:37.597 00:08:37.597 real 0m12.108s 00:08:37.597 user 0m20.886s 00:08:37.597 sys 0m7.844s 00:08:37.597 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.597 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.597 ************************************ 00:08:37.597 END TEST nvmf_bdev_io_wait 00:08:37.597 ************************************ 00:08:37.597 12:47:03 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:37.597 12:47:03 
nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.597 12:47:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.597 12:47:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.597 ************************************ 00:08:37.597 START TEST nvmf_queue_depth 00:08:37.597 ************************************ 00:08:37.597 12:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:37.858 * Looking for test storage... 00:08:37.858 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:37.858 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.859 --rc genhtml_branch_coverage=1 00:08:37.859 --rc genhtml_function_coverage=1 00:08:37.859 --rc genhtml_legend=1 00:08:37.859 --rc geninfo_all_blocks=1 00:08:37.859 --rc geninfo_unexecuted_blocks=1 00:08:37.859 00:08:37.859 ' 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.859 --rc genhtml_branch_coverage=1 00:08:37.859 --rc genhtml_function_coverage=1 00:08:37.859 --rc genhtml_legend=1 00:08:37.859 --rc geninfo_all_blocks=1 00:08:37.859 --rc geninfo_unexecuted_blocks=1 00:08:37.859 00:08:37.859 ' 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.859 --rc genhtml_branch_coverage=1 00:08:37.859 --rc genhtml_function_coverage=1 00:08:37.859 --rc genhtml_legend=1 00:08:37.859 --rc geninfo_all_blocks=1 00:08:37.859 --rc geninfo_unexecuted_blocks=1 00:08:37.859 00:08:37.859 ' 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.859 --rc genhtml_branch_coverage=1 00:08:37.859 --rc genhtml_function_coverage=1 00:08:37.859 --rc genhtml_legend=1 00:08:37.859 --rc geninfo_all_blocks=1 00:08:37.859 --rc geninfo_unexecuted_blocks=1 00:08:37.859 00:08:37.859 ' 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.859 12:47:04 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.859 12:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:47.835 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:47.835 Found 0000:d9:00.1 (0x15b3 - 0x1015) 
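The 'Found net devices under ...' lines here and just below come from a sysfs glob (the pci_net_devs assignment traced at nvmf/common.sh@411); in isolation the lookup is roughly:

    # Sketch of the traced PCI-to-netdev mapping: each Mellanox port exposes
    # its interface name under /sys/bus/pci/devices/<pci>/net/.
    for pci in 0000:d9:00.0 0000:d9:00.1; do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${netdev##*/}"
        done
    done
    # -> Found net devices under 0000:d9:00.0: mlx_0_0
    # -> Found net devices under 0000:d9:00.1: mlx_0_1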
00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:47.835 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:47.835 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:47.835 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:47.836 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:47.836 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:47.836 altname enp217s0f0np0 00:08:47.836 altname ens818f0np0 00:08:47.836 inet 192.168.100.8/24 scope global mlx_0_0 00:08:47.836 valid_lft forever preferred_lft forever 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:47.836 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:47.836 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:47.836 altname enp217s0f1np1 00:08:47.836 altname ens818f1np1 00:08:47.836 inet 192.168.100.9/24 scope global mlx_0_1 00:08:47.836 valid_lft forever preferred_lft forever 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:47.836 12:47:12 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:47.836 192.168.100.9' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:47.836 192.168.100.9' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:47.836 192.168.100.9' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4036696 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4036696 00:08:47.836 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4036696 ']' 00:08:47.837 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.837 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.837 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.837 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.837 12:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.837 [2024-11-27 12:47:12.874304] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
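The waitforlisten step above blocks until nvmf_tgt (pid 4036696) answers on its UNIX-domain RPC socket. A minimal sketch of that polling idea, assuming SPDK's scripts/rpc.py and its standard rpc_get_methods method (the loop body is illustrative, not the harness's exact code):

    # Poll an SPDK app's RPC socket until it is ready, or give up.
    waitforrpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} tries=100
        while (( tries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # app exited early
            scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.5
        done
        return 1                                     # never came up
    }
    # e.g. waitforrpc 4036696 /var/tmp/spdk.sock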
00:08:47.837 [2024-11-27 12:47:12.874361] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.837 [2024-11-27 12:47:12.966530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.837 [2024-11-27 12:47:13.003480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.837 [2024-11-27 12:47:13.003516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.837 [2024-11-27 12:47:13.003525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.837 [2024-11-27 12:47:13.003533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.837 [2024-11-27 12:47:13.003540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.837 [2024-11-27 12:47:13.004130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.837 [2024-11-27 12:47:13.778938] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x195aea0/0x195f390) succeed. 00:08:47.837 [2024-11-27 12:47:13.787399] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x195c350/0x19a0a30) succeed. 
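With both mlx5 ports registered as IB devices, the target setup continues over RPC: the nvmf_create_transport call just traced, then, in the lines that follow, a malloc bdev, a subsystem, a namespace, and an RDMA listener. Replayed by hand, the same sequence looks like this (a sketch assembled from the rpc_cmd calls in this trace; repo-relative paths and the default /var/tmp/spdk.sock assumed):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420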
00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.837 Malloc0 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.837 [2024-11-27 12:47:13.881186] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4036976 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4036976 /var/tmp/bdevperf.sock 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4036976 ']' 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:47.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.837 12:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.837 [2024-11-27 12:47:13.933132] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:08:47.837 [2024-11-27 12:47:13.933176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036976 ] 00:08:47.837 [2024-11-27 12:47:14.019221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.837 [2024-11-27 12:47:14.057878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.411 12:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.411 12:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:48.411 12:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:48.411 12:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.411 12:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.683 NVMe0n1 00:08:48.683 12:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.683 12:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:48.683 Running I/O for 10 seconds... 
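The initiator half pairs a bdevperf started with -z (idle until told what to do) with two out-of-band commands: attach the remote controller, then trigger the run. Reconstructed from the trace (paths abbreviated to the repo root; the harness interposes waitforlisten on /var/tmp/bdevperf.sock between the first two steps):

    # Queue depth 1024, 4 KiB verify I/O, 10 s, against the RDMA target above.
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests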
00:08:50.634 17408.00 IOPS, 68.00 MiB/s [2024-11-27T11:47:18.391Z] 17484.50 IOPS, 68.30 MiB/s [2024-11-27T11:47:19.329Z] 17749.33 IOPS, 69.33 MiB/s [2024-11-27T11:47:20.265Z] 17787.50 IOPS, 69.48 MiB/s [2024-11-27T11:47:21.198Z] 17817.60 IOPS, 69.60 MiB/s [2024-11-27T11:47:22.134Z] 17777.17 IOPS, 69.44 MiB/s [2024-11-27T11:47:23.069Z] 17846.86 IOPS, 69.71 MiB/s [2024-11-27T11:47:24.001Z] 17830.62 IOPS, 69.65 MiB/s [2024-11-27T11:47:25.375Z] 17863.11 IOPS, 69.78 MiB/s [2024-11-27T11:47:25.375Z] 17866.50 IOPS, 69.79 MiB/s 00:08:58.990 Latency(us) 00:08:58.990 [2024-11-27T11:47:25.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.990 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:58.990 Verification LBA range: start 0x0 length 0x4000 00:08:58.990 NVMe0n1 : 10.04 17898.00 69.91 0.00 0.00 57042.29 10328.47 35651.58 00:08:58.990 [2024-11-27T11:47:25.375Z] =================================================================================================================== 00:08:58.990 [2024-11-27T11:47:25.375Z] Total : 17898.00 69.91 0.00 0.00 57042.29 10328.47 35651.58 00:08:58.990 { 00:08:58.990 "results": [ 00:08:58.990 { 00:08:58.990 "job": "NVMe0n1", 00:08:58.990 "core_mask": "0x1", 00:08:58.990 "workload": "verify", 00:08:58.990 "status": "finished", 00:08:58.990 "verify_range": { 00:08:58.990 "start": 0, 00:08:58.990 "length": 16384 00:08:58.990 }, 00:08:58.990 "queue_depth": 1024, 00:08:58.990 "io_size": 4096, 00:08:58.990 "runtime": 10.037772, 00:08:58.990 "iops": 17897.995690677173, 00:08:58.990 "mibps": 69.91404566670771, 00:08:58.990 "io_failed": 0, 00:08:58.990 "io_timeout": 0, 00:08:58.990 "avg_latency_us": 57042.288971955284, 00:08:58.990 "min_latency_us": 10328.4736, 00:08:58.990 "max_latency_us": 35651.584 00:08:58.990 } 00:08:58.990 ], 00:08:58.990 "core_count": 1 00:08:58.990 } 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4036976 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4036976 ']' 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4036976 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4036976 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4036976' 00:08:58.990 killing process with pid 4036976 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4036976 00:08:58.990 Received shutdown signal, test time was about 10.000000 seconds 00:08:58.990 00:08:58.990 Latency(us) 00:08:58.990 [2024-11-27T11:47:25.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.990 [2024-11-27T11:47:25.375Z] 
=================================================================================================================== 00:08:58.990 [2024-11-27T11:47:25.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4036976 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:58.990 rmmod nvme_rdma 00:08:58.990 rmmod nvme_fabrics 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4036696 ']' 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4036696 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4036696 ']' 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4036696 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.990 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4036696 00:08:59.248 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:59.248 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:59.248 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4036696' 00:08:59.248 killing process with pid 4036696 00:08:59.248 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4036696 00:08:59.248 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4036696 00:08:59.248 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.248 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:59.248 00:08:59.248 real 0m21.630s 00:08:59.248 user 0m26.955s 00:08:59.248 sys 0m7.249s 00:08:59.248 
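As a sanity check, the summary above is internally consistent: 17898.00 IOPS × 4096 B per I/O = 73,310,208 B/s, and 73,310,208 / 1,048,576 ≈ 69.91 MiB/s, matching the reported throughput; over the 10.037772 s runtime that works out to roughly 179.7 k I/Os completed at queue depth 1024.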
12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.248 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.248 ************************************ 00:08:59.248 END TEST nvmf_queue_depth 00:08:59.248 ************************************ 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.507 ************************************ 00:08:59.507 START TEST nvmf_target_multipath 00:08:59.507 ************************************ 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:08:59.507 * Looking for test storage... 00:08:59.507 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:59.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.507 --rc genhtml_branch_coverage=1 00:08:59.507 --rc genhtml_function_coverage=1 00:08:59.507 --rc genhtml_legend=1 00:08:59.507 --rc geninfo_all_blocks=1 00:08:59.507 --rc geninfo_unexecuted_blocks=1 00:08:59.507 00:08:59.507 ' 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:59.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.507 --rc genhtml_branch_coverage=1 00:08:59.507 --rc genhtml_function_coverage=1 00:08:59.507 --rc genhtml_legend=1 00:08:59.507 --rc geninfo_all_blocks=1 00:08:59.507 --rc geninfo_unexecuted_blocks=1 00:08:59.507 00:08:59.507 ' 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:59.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.507 --rc genhtml_branch_coverage=1 00:08:59.507 --rc genhtml_function_coverage=1 00:08:59.507 --rc genhtml_legend=1 00:08:59.507 --rc geninfo_all_blocks=1 00:08:59.507 --rc geninfo_unexecuted_blocks=1 00:08:59.507 00:08:59.507 ' 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:59.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.507 --rc genhtml_branch_coverage=1 00:08:59.507 --rc genhtml_function_coverage=1 00:08:59.507 --rc genhtml_legend=1 00:08:59.507 --rc geninfo_all_blocks=1 00:08:59.507 --rc geninfo_unexecuted_blocks=1 00:08:59.507 00:08:59.507 ' 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.507 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.508 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.508 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.508 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.508 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.508 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.508 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.508 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.508 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.765 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:59.765 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:59.765 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.765 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.765 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.765 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.765 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:59.765 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.765 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.765 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.765 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.765 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.766 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.766 12:47:25 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.749 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:09.750 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:09.750 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:09.750 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:09.750 
12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:09.750 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
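The walk being traced here repeats the discovery done at device-scan time: each mlx5 PCI function exposes its netdev under /sys/bus/pci/devices/<addr>/net, and get_ip_address then peels the IPv4 address out of "ip -o -4" output with awk and cut. Condensed into a standalone sketch (PCI address taken from this run):

    pci=0000:d9:00.0                                  # first port in this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one entry per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")           # names only, e.g. mlx_0_0
    for dev in "${pci_net_devs[@]}"; do
        ip=$(ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1)
        echo "$dev -> ${ip:-<no IPv4>}"               # e.g. mlx_0_0 -> 192.168.100.8
    done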
00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:09.750 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:09.750 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:09.750 altname enp217s0f0np0 00:09:09.750 altname ens818f0np0 00:09:09.750 inet 192.168.100.8/24 scope global mlx_0_0 00:09:09.750 valid_lft forever preferred_lft forever 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:09.750 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:09.750 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:09.751 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:09.751 altname enp217s0f1np1 00:09:09.751 altname ens818f1np1 00:09:09.751 inet 192.168.100.9/24 scope global mlx_0_1 00:09:09.751 valid_lft forever preferred_lft forever 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:09.751 192.168.100.9' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:09.751 192.168.100.9' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:09.751 192.168.100.9' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:09:09.751 run this test only with TCP transport for now 00:09:09.751 12:47:34 
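RDMA_IP_LIST above is a newline-separated string, and the first and second target IPs are peeled off with head and tail. A sketch of that selection using the two addresses from the log; variable names match the trace:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9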
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:09.751 rmmod nvme_rdma 00:09:09.751 rmmod nvme_fabrics 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:09.751 00:09:09.751 real 0m9.163s 00:09:09.751 user 0m2.658s 00:09:09.751 sys 0m6.762s 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 
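The teardown just traced (nvmftestfini -> nvmfcleanup) syncs, then unloads the fabric modules with errexit temporarily disabled, since modprobe -r can fail while references linger. A hedged sketch of the pattern; the 20-iteration bound and module names are from the trace, the sleep is an assumed back-off the log does not show:

    sync
    set +e                     # tolerate transient "module in use" failures
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1                # assumed pause between retries
    done
    set -e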
00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.751 ************************************ 00:09:09.751 END TEST nvmf_target_multipath 00:09:09.751 ************************************ 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.751 ************************************ 00:09:09.751 START TEST nvmf_zcopy 00:09:09.751 ************************************ 00:09:09.751 12:47:34 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:09.751 * Looking for test storage... 00:09:09.751 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:09.751 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.751 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.751 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:09.751 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:09.751 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.751 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.751 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.751 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.751 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.751 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.751 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.751 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:09.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.752 --rc genhtml_branch_coverage=1 00:09:09.752 --rc genhtml_function_coverage=1 00:09:09.752 --rc genhtml_legend=1 00:09:09.752 --rc geninfo_all_blocks=1 00:09:09.752 --rc geninfo_unexecuted_blocks=1 00:09:09.752 00:09:09.752 ' 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:09.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.752 --rc genhtml_branch_coverage=1 00:09:09.752 --rc genhtml_function_coverage=1 00:09:09.752 --rc genhtml_legend=1 00:09:09.752 --rc geninfo_all_blocks=1 00:09:09.752 --rc geninfo_unexecuted_blocks=1 00:09:09.752 00:09:09.752 ' 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:09.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.752 --rc genhtml_branch_coverage=1 00:09:09.752 --rc genhtml_function_coverage=1 00:09:09.752 --rc genhtml_legend=1 00:09:09.752 --rc geninfo_all_blocks=1 00:09:09.752 --rc geninfo_unexecuted_blocks=1 00:09:09.752 00:09:09.752 ' 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:09.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.752 --rc genhtml_branch_coverage=1 00:09:09.752 --rc genhtml_function_coverage=1 00:09:09.752 --rc genhtml_legend=1 00:09:09.752 --rc geninfo_all_blocks=1 00:09:09.752 --rc geninfo_unexecuted_blocks=1 00:09:09.752 00:09:09.752 ' 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
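The "lt 1.15 2" check above splits version strings on ".", "-" and ":" (IFS=.-:) and compares components numerically to decide whether the installed lcov predates 2.x. A compact sketch of such a comparison under the same convention; this is an illustrative reconstruction, not scripts/common.sh verbatim:

    version_lt() {             # true if $1 < $2, component-wise
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1               # equal is not less-than
    }
    # version_lt 1.15 2 -> true, so the lcov 1.x LCOV_OPTS above get exported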
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.752 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.752 12:47:35 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:17.867 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:17.867 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
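The scan above matches PCI functions against per-family ID tables (e810, x722, mlx) before looking up their net devices; the two hits, 0x15b3:0x1015, are Mellanox ConnectX-4 Lx ports. A rough by-hand equivalent using sysfs, offered as an illustration rather than SPDK's exact code:

    # List Mellanox ConnectX-4 Lx functions the way the trace reports them.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")
        device=$(<"$pci/device")
        if [[ $vendor == 0x15b3 && $device == 0x1015 ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
        fi
    done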
00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:17.867 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:17.867 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:17.867 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:17.868 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:17.868 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:17.868 altname enp217s0f0np0 00:09:17.868 altname ens818f0np0 00:09:17.868 inet 192.168.100.8/24 scope global mlx_0_0 
00:09:17.868 valid_lft forever preferred_lft forever 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:17.868 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:17.868 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:17.868 altname enp217s0f1np1 00:09:17.868 altname ens818f1np1 00:09:17.868 inet 192.168.100.9/24 scope global mlx_0_1 00:09:17.868 valid_lft forever preferred_lft forever 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:17.868 12:47:43 
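The interface filter traced here is a nested loop with a two-level continue: each candidate net_dev is checked against the rxe-capable list, and on a match its name is echoed and "continue 2" resumes the outer loop directly. A self-contained rendering with the names seen in the trace (the backslash-escaped right-hand sides in the log are just literal matches, equivalent to quoting):

    net_devs=(mlx_0_0 mlx_0_1)
    rxe_net_devs=(mlx_0_0 mlx_0_1)
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2     # skip the remaining rxe candidates
            fi
        done
    done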
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:17.868 192.168.100.9' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:17.868 192.168.100.9' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:17.868 192.168.100.9' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=4047220 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 4047220 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 4047220 ']' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.868 [2024-11-27 12:47:43.654748] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:09:17.868 [2024-11-27 12:47:43.654798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.868 [2024-11-27 12:47:43.742034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.868 [2024-11-27 12:47:43.780344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.868 [2024-11-27 12:47:43.780383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.868 [2024-11-27 12:47:43.780393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.868 [2024-11-27 12:47:43.780401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.868 [2024-11-27 12:47:43.780408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
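nvmfappstart above launches the target on core mask 0x2 and then blocks in waitforlisten until the RPC socket answers. A loose sketch of what that amounts to; the binary path, flags and socket are from the trace, while the polling loop and its bounds are illustrative, not SPDK's exact helper:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    for i in {1..100}; do      # poll until the RPC socket accepts commands
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done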
00:09:17.868 [2024-11-27 12:47:43.781023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:09:17.868 Unsupported transport: rdma 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:17.868 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:17.869 nvmf_trace.0 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.869 12:47:43 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:17.869 rmmod nvme_rdma 00:09:17.869 rmmod nvme_fabrics 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
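On exit, process_shm archives the SPDK trace ring from /dev/shm so it can be inspected offline, matching the "copy /dev/shm/nvmf_trace.0" hint in the startup notice above. The same capture by hand, with $output_dir standing in for the job's output directory:

    find /dev/shm -name '*.0' -printf '%f\n'    # -> nvmf_trace.0
    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0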
00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 4047220 ']' 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 4047220 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 4047220 ']' 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 4047220 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4047220 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4047220' 00:09:17.869 killing process with pid 4047220 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 4047220 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 4047220 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:17.869 00:09:17.869 real 0m9.313s 00:09:17.869 user 0m3.126s 00:09:17.869 sys 0m6.834s 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.869 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.869 ************************************ 00:09:17.869 END TEST nvmf_zcopy 00:09:17.869 ************************************ 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:18.128 ************************************ 00:09:18.128 START TEST nvmf_nmic 00:09:18.128 ************************************ 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:18.128 * Looking for test storage... 
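The killprocess teardown above checks that the pid is set and alive, and that it is not a sudo wrapper, before signalling it. A hedged reconstruction of that flow; the real helper in autotest_common.sh may differ in detail:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1               # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")  # reactor_1 in this run
        [ "$name" = sudo ] && return 1           # refuse here; the real helper may branch instead
        kill "$pid"
        wait "$pid" || true                      # reap if it is our child
    }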
00:09:18.128 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:18.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.128 --rc genhtml_branch_coverage=1 00:09:18.128 --rc genhtml_function_coverage=1 00:09:18.128 --rc genhtml_legend=1 00:09:18.128 --rc geninfo_all_blocks=1 00:09:18.128 --rc geninfo_unexecuted_blocks=1 00:09:18.128 00:09:18.128 ' 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:18.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.128 --rc genhtml_branch_coverage=1 00:09:18.128 --rc genhtml_function_coverage=1 00:09:18.128 --rc genhtml_legend=1 00:09:18.128 --rc geninfo_all_blocks=1 00:09:18.128 --rc geninfo_unexecuted_blocks=1 00:09:18.128 00:09:18.128 ' 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:18.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.128 --rc genhtml_branch_coverage=1 00:09:18.128 --rc genhtml_function_coverage=1 00:09:18.128 --rc genhtml_legend=1 00:09:18.128 --rc geninfo_all_blocks=1 00:09:18.128 --rc geninfo_unexecuted_blocks=1 00:09:18.128 00:09:18.128 ' 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:18.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.128 --rc genhtml_branch_coverage=1 00:09:18.128 --rc genhtml_function_coverage=1 00:09:18.128 --rc genhtml_legend=1 00:09:18.128 --rc geninfo_all_blocks=1 00:09:18.128 --rc geninfo_unexecuted_blocks=1 00:09:18.128 00:09:18.128 ' 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.128 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:18.388 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
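Condensed for reference, the environment that nvmf/common.sh has established by the time nvmftestinit runs is roughly the following; this is a paraphrased sketch assembled from the trace above, not the literal common.sh source:

    # Sketch of the nvmf/common.sh defaults visible in the trace (paraphrased)
    NVMF_PORT=4420                     # primary NVMe-oF listener
    NVMF_SECOND_PORT=4421              # second path used by the multipath test case
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8               # first host octet allocate_nic_ips hands out
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # uuid suffix, as seen in the trace
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # from build_nvmf_app_args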
00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:18.388 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:18.389 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.389 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.389 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.389 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:18.389 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:18.389 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:18.389 12:47:44 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.509 12:47:52 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:26.509 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:26.509 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:26.509 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:26.509 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:09:26.509 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
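The discovery pass just traced reduces to loading the kernel RDMA stack and collecting the netdev behind each Mellanox PCI function; a condensed rendering of that logic, assembled from the xtrace lines above and simplified:

    # Condensed from the rdma_device_init / net-device scan traced above
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
    net_devs=()
    for pci in "${pci_devs[@]}"; do                       # 0000:d9:00.0, 0000:d9:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev dir(s) under the function
        pci_net_devs=("${pci_net_devs[@]##*/}")           # basename -> mlx_0_0 / mlx_0_1
        net_devs+=("${pci_net_devs[@]}")
    done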
00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:26.510 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:26.769 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:26.770 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:26.770 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:26.770 altname enp217s0f0np0 00:09:26.770 altname 
ens818f0np0 00:09:26.770 inet 192.168.100.8/24 scope global mlx_0_0 00:09:26.770 valid_lft forever preferred_lft forever 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:26.770 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:26.770 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:26.770 altname enp217s0f1np1 00:09:26.770 altname ens818f1np1 00:09:26.770 inet 192.168.100.9/24 scope global mlx_0_1 00:09:26.770 valid_lft forever preferred_lft forever 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
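The address lookup repeated above for mlx_0_0 and mlx_0_1 is a single pipeline over `ip -o -4`; written out as the helper the trace names, with the body paraphrased from the traced commands:

    # get_ip_address as exercised in the trace: take the CIDR field of `ip -o -4`,
    # keep only the address part
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8
    get_ip_address mlx_0_1   # prints 192.168.100.9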
00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:26.770 12:47:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:26.770 192.168.100.9' 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:26.770 192.168.100.9' 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:26.770 192.168.100.9' 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=4051421 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 4051421 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 4051421 ']' 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.770 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.770 [2024-11-27 12:47:53.125907] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:09:26.771 [2024-11-27 12:47:53.125961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.029 [2024-11-27 12:47:53.214690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.029 [2024-11-27 12:47:53.254569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.029 [2024-11-27 12:47:53.254614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.029 [2024-11-27 12:47:53.254623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.029 [2024-11-27 12:47:53.254632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.029 [2024-11-27 12:47:53.254638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
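nvmfappstart, whose output follows, amounts to launching nvmf_tgt in the background and waiting for its UNIX-domain RPC socket; a minimal sketch, under the assumption that polling for the socket is an acceptable stand-in for the real waitforlisten helper:

    # Minimal sketch of nvmfappstart; the polling loop is illustrative,
    # not waitforlisten's actual implementation
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do          # cap mirrors max_retries=100 in the trace
        [[ -S /var/tmp/spdk.sock ]] && break # socket appears once the app is listening
        sleep 0.1
    done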
00:09:27.029 [2024-11-27 12:47:53.256228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.029 [2024-11-27 12:47:53.256326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.029 [2024-11-27 12:47:53.256389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.029 [2024-11-27 12:47:53.256391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.597 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.597 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:27.597 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.597 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.597 12:47:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.856 [2024-11-27 12:47:54.042692] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xaecdf0/0xaf12e0) succeed. 00:09:27.856 [2024-11-27 12:47:54.052057] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xaee480/0xb32980) succeed. 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.856 Malloc0 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.856 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:27.857 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.857 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.857 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.857 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:27.857 12:47:54 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.857 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.857 [2024-11-27 12:47:54.230839] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:27.857 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.857 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:27.857 test case1: single bdev can't be used in multiple subsystems 00:09:27.857 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:27.857 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.857 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.115 [2024-11-27 12:47:54.258632] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:28.115 [2024-11-27 12:47:54.258656] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:28.115 [2024-11-27 12:47:54.258666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.115 request: 00:09:28.115 { 00:09:28.115 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:28.115 "namespace": { 00:09:28.115 "bdev_name": "Malloc0", 00:09:28.115 "no_auto_visible": false, 00:09:28.115 "hide_metadata": false 00:09:28.115 }, 00:09:28.115 "method": "nvmf_subsystem_add_ns", 00:09:28.115 "req_id": 1 00:09:28.115 } 00:09:28.115 Got JSON-RPC error response 00:09:28.115 response: 00:09:28.115 { 00:09:28.115 "code": -32602, 00:09:28.115 "message": "Invalid parameters" 00:09:28.115 } 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:09:28.115 Adding namespace failed - expected result. 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:28.115 test case2: host connect to nvmf target in multiple paths 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.115 [2024-11-27 12:47:54.274681] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.115 12:47:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:29.049 12:47:55 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:09:29.983 12:47:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.983 12:47:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:29.983 12:47:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.983 12:47:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:29.983 12:47:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:32.513 12:47:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:32.513 12:47:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:32.513 12:47:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.513 12:47:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:32.513 12:47:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.513 12:47:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:32.513 12:47:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:32.513 [global] 00:09:32.513 thread=1 00:09:32.513 invalidate=1 00:09:32.513 rw=write 00:09:32.513 time_based=1 00:09:32.513 runtime=1 00:09:32.513 ioengine=libaio 00:09:32.513 direct=1 00:09:32.513 bs=4096 00:09:32.513 iodepth=1 00:09:32.513 norandommap=0 00:09:32.513 numjobs=1 00:09:32.513 00:09:32.513 verify_dump=1 00:09:32.513 verify_backlog=512 00:09:32.513 verify_state_save=0 00:09:32.513 do_verify=1 00:09:32.513 verify=crc32c-intel 00:09:32.513 [job0] 00:09:32.513 filename=/dev/nvme0n1 00:09:32.513 Could not set queue depth 
(nvme0n1) 00:09:32.513 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.513 fio-3.35 00:09:32.513 Starting 1 thread 00:09:33.446 00:09:33.446 job0: (groupid=0, jobs=1): err= 0: pid=4052612: Wed Nov 27 12:47:59 2024 00:09:33.446 read: IOPS=7049, BW=27.5MiB/s (28.9MB/s)(27.6MiB/1001msec) 00:09:33.446 slat (nsec): min=8328, max=19880, avg=8825.52, stdev=803.44 00:09:33.446 clat (nsec): min=46735, max=83675, avg=58503.86, stdev=3280.03 00:09:33.446 lat (nsec): min=59040, max=92381, avg=67329.39, stdev=3376.15 00:09:33.446 clat percentiles (nsec): 00:09:33.446 | 1.00th=[52480], 5.00th=[53504], 10.00th=[54528], 20.00th=[55552], 00:09:33.446 | 30.00th=[56576], 40.00th=[57600], 50.00th=[58112], 60.00th=[59136], 00:09:33.446 | 70.00th=[60160], 80.00th=[61184], 90.00th=[62720], 95.00th=[64256], 00:09:33.446 | 99.00th=[67072], 99.50th=[69120], 99.90th=[72192], 99.95th=[75264], 00:09:33.446 | 99.99th=[83456] 00:09:33.446 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:09:33.446 slat (nsec): min=10715, max=46750, avg=11415.89, stdev=1164.80 00:09:33.446 clat (nsec): min=38711, max=94526, avg=56515.08, stdev=3351.52 00:09:33.446 lat (usec): min=59, max=141, avg=67.93, stdev= 3.53 00:09:33.446 clat percentiles (nsec): 00:09:33.446 | 1.00th=[49920], 5.00th=[51456], 10.00th=[52480], 20.00th=[53504], 00:09:33.446 | 30.00th=[54528], 40.00th=[55552], 50.00th=[56064], 60.00th=[57088], 00:09:33.446 | 70.00th=[58112], 80.00th=[59136], 90.00th=[61184], 95.00th=[62208], 00:09:33.446 | 99.00th=[64768], 99.50th=[66048], 99.90th=[70144], 99.95th=[76288], 00:09:33.446 | 99.99th=[94720] 00:09:33.446 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:09:33.446 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:09:33.446 lat (usec) : 50=0.41%, 100=99.59% 00:09:33.446 cpu : usr=12.90%, sys=17.20%, ctx=14225, majf=0, minf=1 00:09:33.446 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.446 issued rwts: total=7057,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.446 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.446 00:09:33.446 Run status group 0 (all jobs): 00:09:33.446 READ: bw=27.5MiB/s (28.9MB/s), 27.5MiB/s-27.5MiB/s (28.9MB/s-28.9MB/s), io=27.6MiB (28.9MB), run=1001-1001msec 00:09:33.446 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:09:33.446 00:09:33.446 Disk stats (read/write): 00:09:33.446 nvme0n1: ios=6193/6642, merge=0/0, ticks=275/313, in_queue=588, util=90.58% 00:09:33.446 12:47:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.347 12:48:01 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.347 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:35.347 rmmod nvme_rdma 00:09:35.605 rmmod nvme_fabrics 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 4051421 ']' 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 4051421 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 4051421 ']' 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 4051421 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4051421 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4051421' 00:09:35.605 killing process with pid 4051421 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 4051421 00:09:35.605 12:48:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 4051421 00:09:35.864 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:35.864 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:35.864 00:09:35.864 real 0m17.770s 00:09:35.864 user 0m45.553s 00:09:35.864 sys 0m7.555s 00:09:35.864 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.864 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:09:35.864 ************************************ 00:09:35.864 END TEST nvmf_nmic 00:09:35.864 ************************************ 00:09:35.864 12:48:02 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:35.864 12:48:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.864 12:48:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.864 12:48:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.864 ************************************ 00:09:35.864 START TEST nvmf_fio_target 00:09:35.864 ************************************ 00:09:35.864 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:36.124 * Looking for test storage... 00:09:36.124 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:36.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.124 --rc genhtml_branch_coverage=1 00:09:36.124 --rc genhtml_function_coverage=1 00:09:36.124 --rc genhtml_legend=1 00:09:36.124 --rc geninfo_all_blocks=1 00:09:36.124 --rc geninfo_unexecuted_blocks=1 00:09:36.124 00:09:36.124 ' 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:36.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.124 --rc genhtml_branch_coverage=1 00:09:36.124 --rc genhtml_function_coverage=1 00:09:36.124 --rc genhtml_legend=1 00:09:36.124 --rc geninfo_all_blocks=1 00:09:36.124 --rc geninfo_unexecuted_blocks=1 00:09:36.124 00:09:36.124 ' 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:36.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.124 --rc genhtml_branch_coverage=1 00:09:36.124 --rc genhtml_function_coverage=1 00:09:36.124 --rc genhtml_legend=1 00:09:36.124 --rc geninfo_all_blocks=1 00:09:36.124 --rc geninfo_unexecuted_blocks=1 00:09:36.124 00:09:36.124 ' 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:36.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.124 --rc genhtml_branch_coverage=1 00:09:36.124 --rc genhtml_function_coverage=1 00:09:36.124 --rc genhtml_legend=1 00:09:36.124 --rc geninfo_all_blocks=1 00:09:36.124 --rc geninfo_unexecuted_blocks=1 00:09:36.124 00:09:36.124 ' 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.124 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.125 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.125 
12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.125 12:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:44.278 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:44.278 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:44.278 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:44.278 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:44.278 12:48:09 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:44.278 12:48:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:44.278 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:44.278 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:44.278 altname enp217s0f0np0 00:09:44.278 altname ens818f0np0 00:09:44.278 inet 192.168.100.8/24 scope global mlx_0_0 00:09:44.278 valid_lft forever preferred_lft forever 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:44.278 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:44.278 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:44.278 altname enp217s0f1np1 00:09:44.278 altname ens818f1np1 00:09:44.278 inet 192.168.100.9/24 scope global mlx_0_1 00:09:44.278 valid_lft forever preferred_lft forever 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:44.278 12:48:10 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:44.278 192.168.100.9' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:44.278 192.168.100.9' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:44.278 192.168.100.9' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=4057667 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 4057667 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 4057667 ']' 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.278 12:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:44.278 [2024-11-27 12:48:10.237699] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:09:44.278 [2024-11-27 12:48:10.237751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.278 [2024-11-27 12:48:10.329209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.279 [2024-11-27 12:48:10.370019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:44.279 [2024-11-27 12:48:10.370059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.279 [2024-11-27 12:48:10.370067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.279 [2024-11-27 12:48:10.370076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.279 [2024-11-27 12:48:10.370082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.279 [2024-11-27 12:48:10.371847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.279 [2024-11-27 12:48:10.371942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.279 [2024-11-27 12:48:10.372028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.279 [2024-11-27 12:48:10.372031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.842 12:48:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.842 12:48:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:44.842 12:48:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:44.842 12:48:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.842 12:48:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.842 12:48:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.842 12:48:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:45.099 [2024-11-27 12:48:11.331771] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c9edf0/0x1ca32e0) succeed. 00:09:45.100 [2024-11-27 12:48:11.340813] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ca0480/0x1ce4980) succeed. 
00:09:45.357 12:48:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.357 12:48:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:45.357 12:48:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.614 12:48:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:45.614 12:48:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.872 12:48:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:45.872 12:48:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.129 12:48:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:46.129 12:48:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:46.387 12:48:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.387 12:48:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:46.387 12:48:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.647 12:48:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:46.647 12:48:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.905 12:48:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:46.905 12:48:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:47.164 12:48:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:47.423 12:48:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:47.423 12:48:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.423 12:48:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:47.423 12:48:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:47.681 12:48:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:47.940 [2024-11-27 12:48:14.170642] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:47.940 12:48:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:48.199 12:48:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:48.457 12:48:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:49.392 12:48:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:49.392 12:48:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:49.392 12:48:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.392 12:48:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:49.392 12:48:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:49.392 12:48:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:51.293 12:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:51.293 12:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:51.293 12:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.293 12:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:51.293 12:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.293 12:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:51.293 12:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:51.293 [global] 00:09:51.293 thread=1 00:09:51.293 invalidate=1 00:09:51.293 rw=write 00:09:51.293 time_based=1 00:09:51.293 runtime=1 00:09:51.293 ioengine=libaio 00:09:51.293 direct=1 00:09:51.293 bs=4096 00:09:51.293 iodepth=1 00:09:51.293 norandommap=0 00:09:51.293 numjobs=1 00:09:51.293 00:09:51.293 verify_dump=1 00:09:51.293 verify_backlog=512 00:09:51.293 verify_state_save=0 00:09:51.293 do_verify=1 00:09:51.293 verify=crc32c-intel 00:09:51.293 [job0] 00:09:51.293 filename=/dev/nvme0n1 00:09:51.293 [job1] 00:09:51.293 filename=/dev/nvme0n2 00:09:51.293 [job2] 00:09:51.293 filename=/dev/nvme0n3 00:09:51.293 [job3] 00:09:51.293 filename=/dev/nvme0n4 00:09:51.578 Could not set queue depth (nvme0n1) 00:09:51.578 Could not set queue depth (nvme0n2) 00:09:51.578 Could not set queue depth (nvme0n3) 00:09:51.578 Could not set queue depth (nvme0n4) 00:09:51.842 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.842 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.842 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.842 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.842 fio-3.35 00:09:51.842 Starting 4 threads 00:09:53.233 00:09:53.233 job0: (groupid=0, jobs=1): err= 0: pid=4059211: Wed Nov 27 12:48:19 2024 00:09:53.233 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:09:53.233 slat (nsec): min=8308, max=26347, avg=9103.47, stdev=931.58 00:09:53.233 clat (usec): min=75, max=181, avg=123.80, stdev= 8.68 00:09:53.233 lat (usec): min=84, max=190, avg=132.90, stdev= 8.63 00:09:53.233 clat percentiles (usec): 00:09:53.233 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 118], 00:09:53.233 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 124], 60.00th=[ 126], 00:09:53.233 | 70.00th=[ 128], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 139], 00:09:53.233 | 99.00th=[ 147], 99.50th=[ 151], 99.90th=[ 172], 99.95th=[ 182], 00:09:53.233 | 99.99th=[ 182] 00:09:53.233 write: IOPS=3956, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1001msec); 0 zone resets 00:09:53.233 slat (nsec): min=6458, max=48853, avg=11243.32, stdev=1216.29 00:09:53.233 clat (usec): min=61, max=174, avg=116.44, stdev= 9.15 00:09:53.233 lat (usec): min=69, max=185, avg=127.68, stdev= 9.21 00:09:53.233 clat percentiles (usec): 00:09:53.233 | 1.00th=[ 94], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 111], 00:09:53.233 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 119], 00:09:53.233 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 128], 95.00th=[ 133], 00:09:53.233 | 99.00th=[ 139], 99.50th=[ 147], 99.90th=[ 165], 99.95th=[ 174], 00:09:53.233 | 99.99th=[ 174] 00:09:53.233 bw ( KiB/s): min=16384, max=16384, per=25.47%, avg=16384.00, stdev= 0.00, samples=1 00:09:53.233 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:53.233 lat (usec) : 100=1.41%, 250=98.59% 00:09:53.233 cpu : usr=6.50%, sys=9.50%, ctx=7545, majf=0, minf=1 00:09:53.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.233 issued rwts: total=3584,3960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.233 job1: (groupid=0, jobs=1): err= 0: pid=4059212: Wed Nov 27 12:48:19 2024 00:09:53.233 read: IOPS=3739, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1001msec) 00:09:53.233 slat (nsec): min=8311, max=42307, avg=8921.28, stdev=1177.30 00:09:53.233 clat (usec): min=76, max=172, avg=119.47, stdev= 7.55 00:09:53.233 lat (usec): min=85, max=181, avg=128.39, stdev= 7.48 00:09:53.233 clat percentiles (usec): 00:09:53.233 | 1.00th=[ 100], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 115], 00:09:53.233 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 122], 00:09:53.233 | 70.00th=[ 124], 80.00th=[ 126], 90.00th=[ 129], 95.00th=[ 131], 00:09:53.233 | 99.00th=[ 137], 99.50th=[ 141], 99.90th=[ 149], 99.95th=[ 161], 00:09:53.233 | 99.99th=[ 174] 00:09:53.233 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:09:53.233 slat (nsec): min=10338, max=47067, avg=11377.39, stdev=1256.52 00:09:53.233 clat (usec): min=67, 
max=158, avg=110.67, stdev= 7.23 00:09:53.233 lat (usec): min=84, max=169, avg=122.04, stdev= 7.25 00:09:53.233 clat percentiles (usec): 00:09:53.233 | 1.00th=[ 92], 5.00th=[ 98], 10.00th=[ 102], 20.00th=[ 106], 00:09:53.233 | 30.00th=[ 109], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 113], 00:09:53.233 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 119], 95.00th=[ 122], 00:09:53.233 | 99.00th=[ 127], 99.50th=[ 131], 99.90th=[ 153], 99.95th=[ 155], 00:09:53.233 | 99.99th=[ 159] 00:09:53.233 bw ( KiB/s): min=16384, max=16384, per=25.47%, avg=16384.00, stdev= 0.00, samples=1 00:09:53.233 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:53.233 lat (usec) : 100=4.26%, 250=95.74% 00:09:53.233 cpu : usr=5.90%, sys=10.80%, ctx=7840, majf=0, minf=1 00:09:53.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.233 issued rwts: total=3743,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.233 job2: (groupid=0, jobs=1): err= 0: pid=4059213: Wed Nov 27 12:48:19 2024 00:09:53.233 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:09:53.233 slat (nsec): min=8973, max=35687, avg=13454.44, stdev=4846.04 00:09:53.233 clat (usec): min=91, max=164, avg=119.41, stdev= 8.30 00:09:53.233 lat (usec): min=101, max=174, avg=132.86, stdev= 6.99 00:09:53.233 clat percentiles (usec): 00:09:53.233 | 1.00th=[ 101], 5.00th=[ 106], 10.00th=[ 109], 20.00th=[ 113], 00:09:53.233 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 122], 00:09:53.233 | 70.00th=[ 124], 80.00th=[ 126], 90.00th=[ 130], 95.00th=[ 133], 00:09:53.233 | 99.00th=[ 141], 99.50th=[ 147], 99.90th=[ 159], 99.95th=[ 165], 00:09:53.233 | 99.99th=[ 165] 00:09:53.233 write: IOPS=3944, BW=15.4MiB/s (16.2MB/s)(15.4MiB/1001msec); 0 zone resets 00:09:53.233 slat (nsec): min=8805, max=51297, avg=17251.61, stdev=4913.87 00:09:53.233 clat (usec): min=78, max=187, avg=110.47, stdev= 7.93 00:09:53.233 lat (usec): min=90, max=196, avg=127.72, stdev= 6.77 00:09:53.233 clat percentiles (usec): 00:09:53.233 | 1.00th=[ 93], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 104], 00:09:53.233 | 30.00th=[ 106], 40.00th=[ 110], 50.00th=[ 111], 60.00th=[ 113], 00:09:53.233 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 120], 95.00th=[ 123], 00:09:53.233 | 99.00th=[ 131], 99.50th=[ 139], 99.90th=[ 149], 99.95th=[ 149], 00:09:53.233 | 99.99th=[ 188] 00:09:53.233 bw ( KiB/s): min=16384, max=16384, per=25.47%, avg=16384.00, stdev= 0.00, samples=1 00:09:53.233 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:53.233 lat (usec) : 100=5.03%, 250=94.97% 00:09:53.233 cpu : usr=5.20%, sys=12.70%, ctx=7533, majf=0, minf=1 00:09:53.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.233 issued rwts: total=3584,3948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.234 job3: (groupid=0, jobs=1): err= 0: pid=4059214: Wed Nov 27 12:48:19 2024 00:09:53.234 read: IOPS=3739, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1001msec) 00:09:53.234 slat (nsec): min=8528, max=35114, avg=9081.41, stdev=1014.85 00:09:53.234 clat (usec): 
min=81, max=160, avg=119.31, stdev= 6.90 00:09:53.234 lat (usec): min=90, max=169, avg=128.39, stdev= 6.88 00:09:53.234 clat percentiles (usec): 00:09:53.234 | 1.00th=[ 103], 5.00th=[ 109], 10.00th=[ 111], 20.00th=[ 115], 00:09:53.234 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 120], 60.00th=[ 122], 00:09:53.234 | 70.00th=[ 123], 80.00th=[ 125], 90.00th=[ 128], 95.00th=[ 130], 00:09:53.234 | 99.00th=[ 137], 99.50th=[ 139], 99.90th=[ 149], 99.95th=[ 151], 00:09:53.234 | 99.99th=[ 161] 00:09:53.234 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:09:53.234 slat (nsec): min=10466, max=39267, avg=11570.51, stdev=1123.33 00:09:53.234 clat (usec): min=70, max=151, avg=110.46, stdev= 6.55 00:09:53.234 lat (usec): min=81, max=163, avg=122.03, stdev= 6.56 00:09:53.234 clat percentiles (usec): 00:09:53.234 | 1.00th=[ 94], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 106], 00:09:53.234 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 111], 60.00th=[ 113], 00:09:53.234 | 70.00th=[ 114], 80.00th=[ 116], 90.00th=[ 119], 95.00th=[ 121], 00:09:53.234 | 99.00th=[ 126], 99.50th=[ 131], 99.90th=[ 145], 99.95th=[ 145], 00:09:53.234 | 99.99th=[ 153] 00:09:53.234 bw ( KiB/s): min=16384, max=16384, per=25.47%, avg=16384.00, stdev= 0.00, samples=1 00:09:53.234 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:53.234 lat (usec) : 100=2.70%, 250=97.30% 00:09:53.234 cpu : usr=5.60%, sys=11.10%, ctx=7839, majf=0, minf=1 00:09:53.234 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.234 issued rwts: total=3743,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.234 00:09:53.234 Run status group 0 (all jobs): 00:09:53.234 READ: bw=57.2MiB/s (60.0MB/s), 14.0MiB/s-14.6MiB/s (14.7MB/s-15.3MB/s), io=57.2MiB (60.0MB), run=1001-1001msec 00:09:53.234 WRITE: bw=62.8MiB/s (65.9MB/s), 15.4MiB/s-16.0MiB/s (16.2MB/s-16.8MB/s), io=62.9MiB (65.9MB), run=1001-1001msec 00:09:53.234 00:09:53.234 Disk stats (read/write): 00:09:53.234 nvme0n1: ios=3121/3224, merge=0/0, ticks=368/341, in_queue=709, util=84.55% 00:09:53.234 nvme0n2: ios=3072/3474, merge=0/0, ticks=339/356, in_queue=695, util=85.60% 00:09:53.234 nvme0n3: ios=3072/3213, merge=0/0, ticks=345/334, in_queue=679, util=88.58% 00:09:53.234 nvme0n4: ios=3072/3474, merge=0/0, ticks=344/361, in_queue=705, util=89.53% 00:09:53.234 12:48:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:53.234 [global] 00:09:53.234 thread=1 00:09:53.234 invalidate=1 00:09:53.234 rw=randwrite 00:09:53.234 time_based=1 00:09:53.234 runtime=1 00:09:53.234 ioengine=libaio 00:09:53.234 direct=1 00:09:53.234 bs=4096 00:09:53.234 iodepth=1 00:09:53.234 norandommap=0 00:09:53.234 numjobs=1 00:09:53.234 00:09:53.234 verify_dump=1 00:09:53.234 verify_backlog=512 00:09:53.234 verify_state_save=0 00:09:53.234 do_verify=1 00:09:53.234 verify=crc32c-intel 00:09:53.234 [job0] 00:09:53.234 filename=/dev/nvme0n1 00:09:53.234 [job1] 00:09:53.234 filename=/dev/nvme0n2 00:09:53.234 [job2] 00:09:53.234 filename=/dev/nvme0n3 00:09:53.234 [job3] 00:09:53.234 filename=/dev/nvme0n4 00:09:53.234 Could not set queue depth (nvme0n1) 00:09:53.234 Could not set queue depth (nvme0n2) 
00:09:53.234 Could not set queue depth (nvme0n3) 00:09:53.234 Could not set queue depth (nvme0n4) 00:09:53.493 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.493 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.493 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.493 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.493 fio-3.35 00:09:53.493 Starting 4 threads 00:09:54.870 00:09:54.870 job0: (groupid=0, jobs=1): err= 0: pid=4059643: Wed Nov 27 12:48:20 2024 00:09:54.870 read: IOPS=3159, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1001msec) 00:09:54.870 slat (nsec): min=8221, max=28099, avg=9200.78, stdev=1064.58 00:09:54.870 clat (usec): min=71, max=329, avg=140.46, stdev=22.20 00:09:54.870 lat (usec): min=80, max=341, avg=149.66, stdev=22.35 00:09:54.870 clat percentiles (usec): 00:09:54.870 | 1.00th=[ 87], 5.00th=[ 114], 10.00th=[ 119], 20.00th=[ 123], 00:09:54.870 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 139], 60.00th=[ 149], 00:09:54.870 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 174], 00:09:54.870 | 99.00th=[ 204], 99.50th=[ 215], 99.90th=[ 260], 99.95th=[ 322], 00:09:54.870 | 99.99th=[ 330] 00:09:54.870 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:54.870 slat (nsec): min=10216, max=67380, avg=11065.02, stdev=1345.05 00:09:54.870 clat (usec): min=66, max=665, avg=131.50, stdev=23.17 00:09:54.870 lat (usec): min=77, max=680, avg=142.56, stdev=23.27 00:09:54.870 clat percentiles (usec): 00:09:54.870 | 1.00th=[ 83], 5.00th=[ 103], 10.00th=[ 110], 20.00th=[ 114], 00:09:54.870 | 30.00th=[ 117], 40.00th=[ 122], 50.00th=[ 133], 60.00th=[ 141], 00:09:54.870 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 165], 00:09:54.870 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 212], 99.95th=[ 338], 00:09:54.870 | 99.99th=[ 668] 00:09:54.870 bw ( KiB/s): min=12432, max=12432, per=19.60%, avg=12432.00, stdev= 0.00, samples=1 00:09:54.870 iops : min= 3108, max= 3108, avg=3108.00, stdev= 0.00, samples=1 00:09:54.870 lat (usec) : 100=3.05%, 250=96.84%, 500=0.09%, 750=0.01% 00:09:54.870 cpu : usr=5.20%, sys=9.10%, ctx=6748, majf=0, minf=1 00:09:54.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.870 issued rwts: total=3163,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.870 job1: (groupid=0, jobs=1): err= 0: pid=4059644: Wed Nov 27 12:48:20 2024 00:09:54.870 read: IOPS=4688, BW=18.3MiB/s (19.2MB/s)(18.3MiB/1001msec) 00:09:54.870 slat (nsec): min=8305, max=19556, avg=8808.91, stdev=818.68 00:09:54.870 clat (usec): min=63, max=179, avg=92.07, stdev=22.81 00:09:54.870 lat (usec): min=72, max=189, avg=100.88, stdev=22.96 00:09:54.870 clat percentiles (usec): 00:09:54.870 | 1.00th=[ 70], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 76], 00:09:54.870 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 83], 00:09:54.870 | 70.00th=[ 106], 80.00th=[ 124], 90.00th=[ 129], 95.00th=[ 133], 00:09:54.870 | 99.00th=[ 141], 99.50th=[ 143], 99.90th=[ 149], 99.95th=[ 155], 00:09:54.870 | 99.99th=[ 180] 00:09:54.870 write: 
IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:09:54.870 slat (nsec): min=10348, max=46504, avg=11162.47, stdev=1296.79 00:09:54.870 clat (usec): min=57, max=181, avg=86.78, stdev=19.89 00:09:54.870 lat (usec): min=70, max=192, avg=97.95, stdev=19.90 00:09:54.870 clat percentiles (usec): 00:09:54.870 | 1.00th=[ 66], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 72], 00:09:54.870 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 80], 00:09:54.870 | 70.00th=[ 96], 80.00th=[ 114], 90.00th=[ 119], 95.00th=[ 123], 00:09:54.870 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 155], 99.95th=[ 167], 00:09:54.870 | 99.99th=[ 182] 00:09:54.870 bw ( KiB/s): min=24576, max=24576, per=38.75%, avg=24576.00, stdev= 0.00, samples=1 00:09:54.870 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:09:54.870 lat (usec) : 100=70.03%, 250=29.97% 00:09:54.870 cpu : usr=8.40%, sys=12.30%, ctx=9813, majf=0, minf=1 00:09:54.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.870 issued rwts: total=4693,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.870 job2: (groupid=0, jobs=1): err= 0: pid=4059645: Wed Nov 27 12:48:20 2024 00:09:54.870 read: IOPS=3120, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec) 00:09:54.870 slat (nsec): min=8369, max=27551, avg=9627.37, stdev=1312.75 00:09:54.870 clat (usec): min=76, max=315, avg=140.59, stdev=21.77 00:09:54.870 lat (usec): min=85, max=325, avg=150.21, stdev=21.56 00:09:54.870 clat percentiles (usec): 00:09:54.870 | 1.00th=[ 91], 5.00th=[ 115], 10.00th=[ 119], 20.00th=[ 123], 00:09:54.870 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 141], 60.00th=[ 149], 00:09:54.870 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 174], 00:09:54.870 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 249], 99.95th=[ 306], 00:09:54.870 | 99.99th=[ 318] 00:09:54.870 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:54.870 slat (nsec): min=10333, max=40374, avg=11790.72, stdev=1687.71 00:09:54.870 clat (usec): min=71, max=732, avg=131.70, stdev=23.18 00:09:54.870 lat (usec): min=82, max=743, avg=143.50, stdev=23.00 00:09:54.870 clat percentiles (usec): 00:09:54.870 | 1.00th=[ 86], 5.00th=[ 104], 10.00th=[ 110], 20.00th=[ 114], 00:09:54.870 | 30.00th=[ 117], 40.00th=[ 122], 50.00th=[ 135], 60.00th=[ 141], 00:09:54.870 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 165], 00:09:54.870 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 217], 99.95th=[ 338], 00:09:54.870 | 99.99th=[ 734] 00:09:54.870 bw ( KiB/s): min=12384, max=12384, per=19.53%, avg=12384.00, stdev= 0.00, samples=1 00:09:54.870 iops : min= 3096, max= 3096, avg=3096.00, stdev= 0.00, samples=1 00:09:54.870 lat (usec) : 100=2.70%, 250=97.23%, 500=0.06%, 750=0.01% 00:09:54.870 cpu : usr=4.80%, sys=9.90%, ctx=6708, majf=0, minf=1 00:09:54.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.870 issued rwts: total=3124,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.870 job3: (groupid=0, jobs=1): err= 0: pid=4059646: Wed Nov 
27 12:48:20 2024 00:09:54.870 read: IOPS=3147, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec) 00:09:54.870 slat (nsec): min=8469, max=29921, avg=9383.91, stdev=1199.66 00:09:54.870 clat (usec): min=75, max=327, avg=140.38, stdev=21.88 00:09:54.870 lat (usec): min=84, max=336, avg=149.76, stdev=21.91 00:09:54.870 clat percentiles (usec): 00:09:54.870 | 1.00th=[ 93], 5.00th=[ 115], 10.00th=[ 119], 20.00th=[ 122], 00:09:54.870 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 139], 60.00th=[ 149], 00:09:54.870 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 176], 00:09:54.870 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 253], 99.95th=[ 302], 00:09:54.870 | 99.99th=[ 326] 00:09:54.870 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:54.870 slat (nsec): min=10497, max=39649, avg=11504.93, stdev=1351.88 00:09:54.870 clat (usec): min=74, max=646, avg=131.24, stdev=22.76 00:09:54.870 lat (usec): min=85, max=657, avg=142.75, stdev=22.75 00:09:54.870 clat percentiles (usec): 00:09:54.870 | 1.00th=[ 86], 5.00th=[ 105], 10.00th=[ 109], 20.00th=[ 113], 00:09:54.870 | 30.00th=[ 116], 40.00th=[ 121], 50.00th=[ 133], 60.00th=[ 141], 00:09:54.870 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 163], 00:09:54.870 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 206], 99.95th=[ 355], 00:09:54.870 | 99.99th=[ 644] 00:09:54.870 bw ( KiB/s): min=12432, max=12432, per=19.60%, avg=12432.00, stdev= 0.00, samples=1 00:09:54.870 iops : min= 3108, max= 3108, avg=3108.00, stdev= 0.00, samples=1 00:09:54.870 lat (usec) : 100=2.55%, 250=97.36%, 500=0.07%, 750=0.01% 00:09:54.870 cpu : usr=4.70%, sys=8.90%, ctx=6735, majf=0, minf=1 00:09:54.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.870 issued rwts: total=3151,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.870 00:09:54.870 Run status group 0 (all jobs): 00:09:54.870 READ: bw=55.1MiB/s (57.8MB/s), 12.2MiB/s-18.3MiB/s (12.8MB/s-19.2MB/s), io=55.2MiB (57.9MB), run=1001-1001msec 00:09:54.870 WRITE: bw=61.9MiB/s (64.9MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=62.0MiB (65.0MB), run=1001-1001msec 00:09:54.870 00:09:54.870 Disk stats (read/write): 00:09:54.870 nvme0n1: ios=2609/2952, merge=0/0, ticks=354/376, in_queue=730, util=84.47% 00:09:54.870 nvme0n2: ios=4096/4495, merge=0/0, ticks=291/317, in_queue=608, util=85.51% 00:09:54.870 nvme0n3: ios=2560/2927, merge=0/0, ticks=344/365, in_queue=709, util=88.49% 00:09:54.870 nvme0n4: ios=2560/2941, merge=0/0, ticks=344/377, in_queue=721, util=89.53% 00:09:54.870 12:48:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:54.870 [global] 00:09:54.870 thread=1 00:09:54.870 invalidate=1 00:09:54.871 rw=write 00:09:54.871 time_based=1 00:09:54.871 runtime=1 00:09:54.871 ioengine=libaio 00:09:54.871 direct=1 00:09:54.871 bs=4096 00:09:54.871 iodepth=128 00:09:54.871 norandommap=0 00:09:54.871 numjobs=1 00:09:54.871 00:09:54.871 verify_dump=1 00:09:54.871 verify_backlog=512 00:09:54.871 verify_state_save=0 00:09:54.871 do_verify=1 00:09:54.871 verify=crc32c-intel 00:09:54.871 [job0] 00:09:54.871 filename=/dev/nvme0n1 00:09:54.871 [job1] 00:09:54.871 filename=/dev/nvme0n2 00:09:54.871 
[job2] 00:09:54.871 filename=/dev/nvme0n3 00:09:54.871 [job3] 00:09:54.871 filename=/dev/nvme0n4 00:09:54.871 Could not set queue depth (nvme0n1) 00:09:54.871 Could not set queue depth (nvme0n2) 00:09:54.871 Could not set queue depth (nvme0n3) 00:09:54.871 Could not set queue depth (nvme0n4) 00:09:55.128 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.128 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.128 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.128 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.128 fio-3.35 00:09:55.128 Starting 4 threads 00:09:56.517 00:09:56.517 job0: (groupid=0, jobs=1): err= 0: pid=4060062: Wed Nov 27 12:48:22 2024 00:09:56.517 read: IOPS=7673, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec) 00:09:56.517 slat (nsec): min=1991, max=1854.9k, avg=63464.81, stdev=200345.75 00:09:56.517 clat (usec): min=413, max=13377, avg=8223.17, stdev=2597.42 00:09:56.517 lat (usec): min=1073, max=13380, avg=8286.64, stdev=2614.08 00:09:56.517 clat percentiles (usec): 00:09:56.517 | 1.00th=[ 5145], 5.00th=[ 5276], 10.00th=[ 5342], 20.00th=[ 5473], 00:09:56.517 | 30.00th=[ 5800], 40.00th=[ 6521], 50.00th=[ 7046], 60.00th=[10290], 00:09:56.517 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11207], 95.00th=[12125], 00:09:56.517 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13042], 99.95th=[13042], 00:09:56.517 | 99.99th=[13435] 00:09:56.517 write: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec); 0 zone resets 00:09:56.517 slat (usec): min=2, max=1733, avg=59.43, stdev=182.93 00:09:56.517 clat (usec): min=1078, max=12694, avg=7754.75, stdev=2489.02 00:09:56.517 lat (usec): min=1087, max=12704, avg=7814.18, stdev=2504.79 00:09:56.517 clat percentiles (usec): 00:09:56.517 | 1.00th=[ 4817], 5.00th=[ 5014], 10.00th=[ 5014], 20.00th=[ 5145], 00:09:56.517 | 30.00th=[ 5538], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 9765], 00:09:56.517 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10683], 95.00th=[11338], 00:09:56.517 | 99.00th=[12387], 99.50th=[12518], 99.90th=[12649], 99.95th=[12649], 00:09:56.517 | 99.99th=[12649] 00:09:56.517 bw ( KiB/s): min=25272, max=25272, per=26.91%, avg=25272.00, stdev= 0.00, samples=1 00:09:56.517 iops : min= 6318, max= 6318, avg=6318.00, stdev= 0.00, samples=1 00:09:56.517 lat (usec) : 500=0.01% 00:09:56.517 lat (msec) : 2=0.18%, 4=0.21%, 10=59.21%, 20=40.40% 00:09:56.517 cpu : usr=3.60%, sys=5.69%, ctx=1741, majf=0, minf=1 00:09:56.517 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:56.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.517 issued rwts: total=7681,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.517 job1: (groupid=0, jobs=1): err= 0: pid=4060063: Wed Nov 27 12:48:22 2024 00:09:56.517 read: IOPS=5031, BW=19.7MiB/s (20.6MB/s)(19.7MiB/1003msec) 00:09:56.517 slat (usec): min=2, max=3935, avg=101.06, stdev=390.99 00:09:56.517 clat (usec): min=230, max=19434, avg=12970.69, stdev=3033.41 00:09:56.517 lat (usec): min=2741, max=19437, avg=13071.75, stdev=3029.29 00:09:56.517 clat percentiles (usec): 00:09:56.517 | 1.00th=[ 7242], 5.00th=[10290], 10.00th=[10552], 
20.00th=[10814], 00:09:56.517 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[12387], 00:09:56.517 | 70.00th=[14615], 80.00th=[15008], 90.00th=[18482], 95.00th=[18482], 00:09:56.517 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19268], 99.95th=[19268], 00:09:56.517 | 99.99th=[19530] 00:09:56.517 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:56.517 slat (usec): min=2, max=3918, avg=92.52, stdev=340.31 00:09:56.517 clat (usec): min=8030, max=19735, avg=11996.89, stdev=2666.19 00:09:56.517 lat (usec): min=8070, max=19738, avg=12089.41, stdev=2665.61 00:09:56.517 clat percentiles (usec): 00:09:56.517 | 1.00th=[ 9503], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10159], 00:09:56.517 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10421], 60.00th=[10945], 00:09:56.517 | 70.00th=[12387], 80.00th=[14484], 90.00th=[15401], 95.00th=[18482], 00:09:56.517 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19268], 99.95th=[19530], 00:09:56.517 | 99.99th=[19792] 00:09:56.517 bw ( KiB/s): min=16384, max=24576, per=21.80%, avg=20480.00, stdev=5792.62, samples=2 00:09:56.517 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:09:56.517 lat (usec) : 250=0.01% 00:09:56.517 lat (msec) : 4=0.31%, 10=4.96%, 20=94.72% 00:09:56.517 cpu : usr=2.40%, sys=4.09%, ctx=1405, majf=0, minf=1 00:09:56.517 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:56.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.517 issued rwts: total=5047,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.517 job2: (groupid=0, jobs=1): err= 0: pid=4060070: Wed Nov 27 12:48:22 2024 00:09:56.517 read: IOPS=5341, BW=20.9MiB/s (21.9MB/s)(20.9MiB/1003msec) 00:09:56.517 slat (usec): min=2, max=3032, avg=88.93, stdev=320.05 00:09:56.517 clat (usec): min=2102, max=19583, avg=11594.27, stdev=3298.31 00:09:56.517 lat (usec): min=2728, max=19588, avg=11683.20, stdev=3315.38 00:09:56.517 clat percentiles (usec): 00:09:56.517 | 1.00th=[ 5014], 5.00th=[ 7439], 10.00th=[ 8029], 20.00th=[ 8291], 00:09:56.517 | 30.00th=[ 8455], 40.00th=[11207], 50.00th=[11600], 60.00th=[12256], 00:09:56.517 | 70.00th=[12911], 80.00th=[13304], 90.00th=[18220], 95.00th=[18482], 00:09:56.517 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19006], 99.95th=[19268], 00:09:56.517 | 99.99th=[19530] 00:09:56.517 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:09:56.517 slat (usec): min=2, max=3195, avg=89.13, stdev=327.37 00:09:56.517 clat (usec): min=6385, max=19020, avg=11476.26, stdev=3293.73 00:09:56.517 lat (usec): min=6454, max=19023, avg=11565.39, stdev=3310.91 00:09:56.517 clat percentiles (usec): 00:09:56.517 | 1.00th=[ 6915], 5.00th=[ 7439], 10.00th=[ 7635], 20.00th=[ 7832], 00:09:56.517 | 30.00th=[10028], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:09:56.518 | 70.00th=[12518], 80.00th=[12649], 90.00th=[18220], 95.00th=[18482], 00:09:56.518 | 99.00th=[19006], 99.50th=[19006], 99.90th=[19006], 99.95th=[19006], 00:09:56.518 | 99.99th=[19006] 00:09:56.518 bw ( KiB/s): min=20480, max=24576, per=23.98%, avg=22528.00, stdev=2896.31, samples=2 00:09:56.518 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:56.518 lat (msec) : 4=0.09%, 10=31.11%, 20=68.80% 00:09:56.518 cpu : usr=2.69%, sys=4.99%, ctx=1161, majf=0, minf=1 00:09:56.518 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:56.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.518 issued rwts: total=5358,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.518 job3: (groupid=0, jobs=1): err= 0: pid=4060071: Wed Nov 27 12:48:22 2024 00:09:56.518 read: IOPS=4154, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1003msec) 00:09:56.518 slat (usec): min=2, max=4006, avg=113.46, stdev=446.00 00:09:56.518 clat (usec): min=1738, max=18963, avg=14435.90, stdev=2223.83 00:09:56.518 lat (usec): min=4531, max=18969, avg=14549.36, stdev=2197.49 00:09:56.518 clat percentiles (usec): 00:09:56.518 | 1.00th=[ 8356], 5.00th=[11863], 10.00th=[12125], 20.00th=[12911], 00:09:56.518 | 30.00th=[13304], 40.00th=[13566], 50.00th=[14484], 60.00th=[14746], 00:09:56.518 | 70.00th=[15008], 80.00th=[15401], 90.00th=[18482], 95.00th=[18744], 00:09:56.518 | 99.00th=[19006], 99.50th=[19006], 99.90th=[19006], 99.95th=[19006], 00:09:56.518 | 99.99th=[19006] 00:09:56.518 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:09:56.518 slat (usec): min=2, max=3809, avg=110.40, stdev=433.40 00:09:56.518 clat (usec): min=9218, max=18811, avg=14457.69, stdev=2129.13 00:09:56.518 lat (usec): min=9944, max=18818, avg=14568.09, stdev=2107.83 00:09:56.518 clat percentiles (usec): 00:09:56.518 | 1.00th=[11207], 5.00th=[11731], 10.00th=[12125], 20.00th=[12518], 00:09:56.518 | 30.00th=[12649], 40.00th=[13960], 50.00th=[14484], 60.00th=[14746], 00:09:56.518 | 70.00th=[14877], 80.00th=[15533], 90.00th=[18482], 95.00th=[18482], 00:09:56.518 | 99.00th=[18744], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:09:56.518 | 99.99th=[18744] 00:09:56.518 bw ( KiB/s): min=17416, max=19000, per=19.39%, avg=18208.00, stdev=1120.06, samples=2 00:09:56.518 iops : min= 4354, max= 4750, avg=4552.00, stdev=280.01, samples=2 00:09:56.518 lat (msec) : 2=0.01%, 10=0.91%, 20=99.08% 00:09:56.518 cpu : usr=2.99%, sys=3.79%, ctx=800, majf=0, minf=1 00:09:56.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:56.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.518 issued rwts: total=4167,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.518 00:09:56.518 Run status group 0 (all jobs): 00:09:56.518 READ: bw=86.7MiB/s (90.9MB/s), 16.2MiB/s-30.0MiB/s (17.0MB/s-31.4MB/s), io=86.9MiB (91.1MB), run=1001-1003msec 00:09:56.518 WRITE: bw=91.7MiB/s (96.2MB/s), 17.9MiB/s-32.0MiB/s (18.8MB/s-33.5MB/s), io=92.0MiB (96.5MB), run=1001-1003msec 00:09:56.518 00:09:56.518 Disk stats (read/write): 00:09:56.518 nvme0n1: ios=6193/6426, merge=0/0, ticks=13293/12983, in_queue=26276, util=84.14% 00:09:56.518 nvme0n2: ios=4096/4566, merge=0/0, ticks=13046/13242, in_queue=26288, util=85.17% 00:09:56.518 nvme0n3: ios=4096/4142, merge=0/0, ticks=14982/14758, in_queue=29740, util=88.43% 00:09:56.518 nvme0n4: ios=3584/3703, merge=0/0, ticks=13053/13129, in_queue=26182, util=89.47% 00:09:56.518 12:48:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:56.518 [global] 00:09:56.518 thread=1 00:09:56.518 
invalidate=1 00:09:56.518 rw=randwrite 00:09:56.518 time_based=1 00:09:56.518 runtime=1 00:09:56.518 ioengine=libaio 00:09:56.518 direct=1 00:09:56.518 bs=4096 00:09:56.518 iodepth=128 00:09:56.518 norandommap=0 00:09:56.518 numjobs=1 00:09:56.518 00:09:56.518 verify_dump=1 00:09:56.518 verify_backlog=512 00:09:56.518 verify_state_save=0 00:09:56.518 do_verify=1 00:09:56.518 verify=crc32c-intel 00:09:56.518 [job0] 00:09:56.518 filename=/dev/nvme0n1 00:09:56.518 [job1] 00:09:56.518 filename=/dev/nvme0n2 00:09:56.518 [job2] 00:09:56.518 filename=/dev/nvme0n3 00:09:56.518 [job3] 00:09:56.518 filename=/dev/nvme0n4 00:09:56.518 Could not set queue depth (nvme0n1) 00:09:56.518 Could not set queue depth (nvme0n2) 00:09:56.518 Could not set queue depth (nvme0n3) 00:09:56.518 Could not set queue depth (nvme0n4) 00:09:56.777 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.777 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.777 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.777 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.777 fio-3.35 00:09:56.777 Starting 4 threads 00:09:58.158 00:09:58.158 job0: (groupid=0, jobs=1): err= 0: pid=4060494: Wed Nov 27 12:48:24 2024 00:09:58.158 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:09:58.158 slat (usec): min=2, max=979, avg=104.32, stdev=263.45 00:09:58.158 clat (usec): min=11961, max=14844, avg=13500.60, stdev=399.64 00:09:58.158 lat (usec): min=12197, max=14928, avg=13604.92, stdev=389.78 00:09:58.158 clat percentiles (usec): 00:09:58.158 | 1.00th=[12387], 5.00th=[12649], 10.00th=[12911], 20.00th=[13173], 00:09:58.158 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:09:58.158 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[13960], 00:09:58.158 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14746], 99.95th=[14746], 00:09:58.158 | 99.99th=[14877] 00:09:58.158 write: IOPS=5095, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:58.158 slat (usec): min=2, max=1560, avg=98.00, stdev=250.82 00:09:58.158 clat (usec): min=2044, max=15959, avg=12662.40, stdev=1069.95 00:09:58.158 lat (usec): min=2784, max=15963, avg=12760.40, stdev=1067.53 00:09:58.158 clat percentiles (usec): 00:09:58.158 | 1.00th=[ 6849], 5.00th=[11731], 10.00th=[11994], 20.00th=[12256], 00:09:58.158 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12780], 60.00th=[12911], 00:09:58.158 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13566], 00:09:58.158 | 99.00th=[15270], 99.50th=[15795], 99.90th=[15926], 99.95th=[15926], 00:09:58.158 | 99.99th=[15926] 00:09:58.158 bw ( KiB/s): min=19392, max=20480, per=18.91%, avg=19936.00, stdev=769.33, samples=2 00:09:58.158 iops : min= 4848, max= 5120, avg=4984.00, stdev=192.33, samples=2 00:09:58.158 lat (msec) : 4=0.20%, 10=0.75%, 20=99.05% 00:09:58.159 cpu : usr=2.20%, sys=4.59%, ctx=1501, majf=0, minf=1 00:09:58.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:58.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.159 issued rwts: total=4608,5111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.159 latency : target=0, window=0, percentile=100.00%, depth=128 
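
A quick sanity check on these numbers: fio's reported bandwidth is simply IOPS times block size. For job0 above, 4594 IOPS at bs=4096 comes out to the reported 17.9 MiB/s:

    # 4594 IOPS * 4096 B per I/O, converted to MiB/s
    awk 'BEGIN { printf "%.1f MiB/s\n", 4594 * 4096 / (1024 * 1024) }'
    # prints 17.9 MiB/s, matching "read: IOPS=4594, BW=17.9MiB/s" above
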
00:09:58.159 job1: (groupid=0, jobs=1): err= 0: pid=4060495: Wed Nov 27 12:48:24 2024 00:09:58.159 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:09:58.159 slat (usec): min=2, max=1001, avg=104.06, stdev=263.45 00:09:58.159 clat (usec): min=12081, max=14846, avg=13490.28, stdev=416.60 00:09:58.159 lat (usec): min=12090, max=14940, avg=13594.34, stdev=400.42 00:09:58.159 clat percentiles (usec): 00:09:58.159 | 1.00th=[12387], 5.00th=[12649], 10.00th=[12911], 20.00th=[13173], 00:09:58.159 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:09:58.159 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[13960], 00:09:58.159 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14746], 99.95th=[14877], 00:09:58.159 | 99.99th=[14877] 00:09:58.159 write: IOPS=5093, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:58.159 slat (usec): min=2, max=2115, avg=98.35, stdev=252.45 00:09:58.159 clat (usec): min=2041, max=15950, avg=12677.90, stdev=1055.96 00:09:58.159 lat (usec): min=2824, max=15953, avg=12776.25, stdev=1053.95 00:09:58.159 clat percentiles (usec): 00:09:58.159 | 1.00th=[ 7635], 5.00th=[11731], 10.00th=[11994], 20.00th=[12256], 00:09:58.159 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12780], 60.00th=[12911], 00:09:58.159 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13566], 00:09:58.159 | 99.00th=[15270], 99.50th=[15795], 99.90th=[15795], 99.95th=[15926], 00:09:58.159 | 99.99th=[15926] 00:09:58.159 bw ( KiB/s): min=19376, max=20480, per=18.91%, avg=19928.00, stdev=780.65, samples=2 00:09:58.159 iops : min= 4844, max= 5120, avg=4982.00, stdev=195.16, samples=2 00:09:58.159 lat (msec) : 4=0.14%, 10=0.80%, 20=99.05% 00:09:58.159 cpu : usr=2.50%, sys=4.19%, ctx=1512, majf=0, minf=1 00:09:58.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:58.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.159 issued rwts: total=4608,5109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.159 job2: (groupid=0, jobs=1): err= 0: pid=4060499: Wed Nov 27 12:48:24 2024 00:09:58.159 read: IOPS=8113, BW=31.7MiB/s (33.2MB/s)(31.8MiB/1002msec) 00:09:58.159 slat (usec): min=2, max=1162, avg=61.03, stdev=221.56 00:09:58.159 clat (usec): min=369, max=13047, avg=8006.39, stdev=1036.36 00:09:58.159 lat (usec): min=1106, max=13049, avg=8067.42, stdev=1052.06 00:09:58.159 clat percentiles (usec): 00:09:58.159 | 1.00th=[ 2343], 5.00th=[ 7373], 10.00th=[ 7570], 20.00th=[ 7701], 00:09:58.159 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8225], 00:09:58.159 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 9110], 00:09:58.159 | 99.00th=[ 9896], 99.50th=[10814], 99.90th=[11863], 99.95th=[13042], 00:09:58.159 | 99.99th=[13042] 00:09:58.159 write: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec); 0 zone resets 00:09:58.159 slat (usec): min=2, max=1798, avg=56.76, stdev=206.50 00:09:58.159 clat (usec): min=273, max=9780, avg=7560.86, stdev=963.45 00:09:58.159 lat (usec): min=334, max=9783, avg=7617.62, stdev=981.92 00:09:58.159 clat percentiles (usec): 00:09:58.159 | 1.00th=[ 1795], 5.00th=[ 7111], 10.00th=[ 7242], 20.00th=[ 7373], 00:09:58.159 | 30.00th=[ 7439], 40.00th=[ 7504], 50.00th=[ 7570], 60.00th=[ 7701], 00:09:58.159 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8455], 00:09:58.159 | 
99.00th=[ 8848], 99.50th=[ 8979], 99.90th=[ 9241], 99.95th=[ 9634], 00:09:58.159 | 99.99th=[ 9765] 00:09:58.159 bw ( KiB/s): min=32848, max=32848, per=31.17%, avg=32848.00, stdev= 0.00, samples=1 00:09:58.159 iops : min= 8212, max= 8212, avg=8212.00, stdev= 0.00, samples=1 00:09:58.159 lat (usec) : 500=0.04%, 750=0.05%, 1000=0.03% 00:09:58.159 lat (msec) : 2=0.83%, 4=1.17%, 10=97.39%, 20=0.48% 00:09:58.159 cpu : usr=3.20%, sys=7.09%, ctx=1057, majf=0, minf=1 00:09:58.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:58.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.159 issued rwts: total=8130,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.159 job3: (groupid=0, jobs=1): err= 0: pid=4060501: Wed Nov 27 12:48:24 2024 00:09:58.159 read: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec) 00:09:58.159 slat (usec): min=2, max=1281, avg=61.88, stdev=223.34 00:09:58.159 clat (usec): min=6953, max=9863, avg=8078.02, stdev=446.17 00:09:58.159 lat (usec): min=6962, max=9869, avg=8139.89, stdev=482.73 00:09:58.159 clat percentiles (usec): 00:09:58.159 | 1.00th=[ 7308], 5.00th=[ 7439], 10.00th=[ 7570], 20.00th=[ 7701], 00:09:58.159 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:09:58.159 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:09:58.159 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[ 9634], 99.95th=[ 9765], 00:09:58.159 | 99.99th=[ 9896] 00:09:58.159 write: IOPS=8000, BW=31.3MiB/s (32.8MB/s)(31.3MiB/1002msec); 0 zone resets 00:09:58.159 slat (usec): min=2, max=5209, avg=62.05, stdev=236.18 00:09:58.159 clat (usec): min=675, max=31372, avg=7995.12, stdev=2357.78 00:09:58.159 lat (usec): min=1551, max=32565, avg=8057.17, stdev=2378.27 00:09:58.159 clat percentiles (usec): 00:09:58.159 | 1.00th=[ 5800], 5.00th=[ 7242], 10.00th=[ 7373], 20.00th=[ 7439], 00:09:58.159 | 30.00th=[ 7504], 40.00th=[ 7570], 50.00th=[ 7635], 60.00th=[ 7767], 00:09:58.159 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8717], 00:09:58.159 | 99.00th=[24773], 99.50th=[28967], 99.90th=[30278], 99.95th=[31327], 00:09:58.159 | 99.99th=[31327] 00:09:58.159 bw ( KiB/s): min=32120, max=32120, per=30.47%, avg=32120.00, stdev= 0.00, samples=1 00:09:58.159 iops : min= 8030, max= 8030, avg=8030.00, stdev= 0.00, samples=1 00:09:58.159 lat (usec) : 750=0.01% 00:09:58.159 lat (msec) : 2=0.10%, 4=0.20%, 10=98.67%, 20=0.36%, 50=0.66% 00:09:58.159 cpu : usr=3.40%, sys=6.39%, ctx=1076, majf=0, minf=1 00:09:58.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:58.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.159 issued rwts: total=7680,8017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.159 00:09:58.159 Run status group 0 (all jobs): 00:09:58.159 READ: bw=97.5MiB/s (102MB/s), 17.9MiB/s-31.7MiB/s (18.8MB/s-33.2MB/s), io=97.8MiB (103MB), run=1002-1003msec 00:09:58.159 WRITE: bw=103MiB/s (108MB/s), 19.9MiB/s-31.9MiB/s (20.9MB/s-33.5MB/s), io=103MiB (108MB), run=1002-1003msec 00:09:58.159 00:09:58.159 Disk stats (read/write): 00:09:58.159 nvme0n1: ios=4001/4096, merge=0/0, ticks=17587/17121, in_queue=34708, util=84.27% 00:09:58.159 
nvme0n2: ios=3960/4096, merge=0/0, ticks=17608/17096, in_queue=34704, util=85.20% 00:09:58.159 nvme0n3: ios=6656/7001, merge=0/0, ticks=13978/14869, in_queue=28847, util=88.36% 00:09:58.159 nvme0n4: ios=6372/6656, merge=0/0, ticks=12629/13380, in_queue=26009, util=89.40% 00:09:58.159 12:48:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:58.159 12:48:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4060761 00:09:58.159 12:48:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:58.159 12:48:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:58.159 [global] 00:09:58.159 thread=1 00:09:58.159 invalidate=1 00:09:58.159 rw=read 00:09:58.159 time_based=1 00:09:58.159 runtime=10 00:09:58.159 ioengine=libaio 00:09:58.159 direct=1 00:09:58.159 bs=4096 00:09:58.159 iodepth=1 00:09:58.159 norandommap=1 00:09:58.159 numjobs=1 00:09:58.159 00:09:58.159 [job0] 00:09:58.159 filename=/dev/nvme0n1 00:09:58.159 [job1] 00:09:58.159 filename=/dev/nvme0n2 00:09:58.159 [job2] 00:09:58.159 filename=/dev/nvme0n3 00:09:58.159 [job3] 00:09:58.159 filename=/dev/nvme0n4 00:09:58.159 Could not set queue depth (nvme0n1) 00:09:58.159 Could not set queue depth (nvme0n2) 00:09:58.159 Could not set queue depth (nvme0n3) 00:09:58.159 Could not set queue depth (nvme0n4) 00:09:58.418 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.418 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.418 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.418 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.418 fio-3.35 00:09:58.418 Starting 4 threads 00:10:00.953 12:48:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:01.212 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=77987840, buflen=4096 00:10:01.213 fio: pid=4060926, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:01.213 12:48:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:01.213 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=85262336, buflen=4096 00:10:01.213 fio: pid=4060925, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:01.213 12:48:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.213 12:48:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:01.472 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=22503424, buflen=4096 00:10:01.472 fio: pid=4060923, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:01.472 12:48:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.472 12:48:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:01.731 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=36618240, buflen=4096 00:10:01.731 fio: pid=4060924, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:01.731 12:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.731 12:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:01.731 00:10:01.731 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4060923: Wed Nov 27 12:48:28 2024 00:10:01.731 read: IOPS=7204, BW=28.1MiB/s (29.5MB/s)(85.5MiB/3037msec) 00:10:01.732 slat (usec): min=6, max=27768, avg=12.12, stdev=242.25 00:10:01.732 clat (usec): min=47, max=1308, avg=124.88, stdev=27.77 00:10:01.732 lat (usec): min=60, max=27951, avg=137.00, stdev=243.93 00:10:01.732 clat percentiles (usec): 00:10:01.732 | 1.00th=[ 60], 5.00th=[ 75], 10.00th=[ 83], 20.00th=[ 106], 00:10:01.732 | 30.00th=[ 121], 40.00th=[ 126], 50.00th=[ 130], 60.00th=[ 133], 00:10:01.732 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 167], 00:10:01.732 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 202], 99.95th=[ 212], 00:10:01.732 | 99.99th=[ 269] 00:10:01.732 bw ( KiB/s): min=25968, max=29760, per=26.28%, avg=27942.40, stdev=1594.73, samples=5 00:10:01.732 iops : min= 6492, max= 7440, avg=6985.60, stdev=398.68, samples=5 00:10:01.732 lat (usec) : 50=0.01%, 100=19.14%, 250=80.83%, 500=0.01% 00:10:01.732 lat (msec) : 2=0.01% 00:10:01.732 cpu : usr=3.42%, sys=9.88%, ctx=21883, majf=0, minf=1 00:10:01.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.732 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.732 issued rwts: total=21879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.732 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4060924: Wed Nov 27 12:48:28 2024 00:10:01.732 read: IOPS=7732, BW=30.2MiB/s (31.7MB/s)(98.9MiB/3275msec) 00:10:01.732 slat (usec): min=5, max=16792, avg=12.31, stdev=201.90 00:10:01.732 clat (usec): min=42, max=22180, avg=114.62, stdev=145.18 00:10:01.732 lat (usec): min=57, max=22188, avg=126.93, stdev=248.36 00:10:01.732 clat percentiles (usec): 00:10:01.732 | 1.00th=[ 55], 5.00th=[ 60], 10.00th=[ 64], 20.00th=[ 78], 00:10:01.732 | 30.00th=[ 91], 40.00th=[ 117], 50.00th=[ 124], 60.00th=[ 128], 00:10:01.732 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 157], 00:10:01.732 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 200], 99.95th=[ 208], 00:10:01.732 | 99.99th=[ 775] 00:10:01.732 bw ( KiB/s): min=27216, max=34513, per=27.94%, avg=29710.83, stdev=2591.83, samples=6 00:10:01.732 iops : min= 6804, max= 8628, avg=7427.67, stdev=647.87, samples=6 00:10:01.732 lat (usec) : 50=0.02%, 100=32.27%, 250=67.68%, 500=0.01%, 1000=0.01% 00:10:01.732 lat (msec) : 10=0.01%, 50=0.01% 00:10:01.732 cpu : usr=3.45%, sys=10.87%, ctx=25332, majf=0, minf=2 00:10:01.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.732 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.732 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.732 issued rwts: total=25325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.732 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4060925: Wed Nov 27 12:48:28 2024 00:10:01.732 read: IOPS=7293, BW=28.5MiB/s (29.9MB/s)(81.3MiB/2854msec) 00:10:01.732 slat (usec): min=8, max=15774, avg=10.60, stdev=141.10 00:10:01.732 clat (usec): min=73, max=344, avg=124.05, stdev=22.87 00:10:01.732 lat (usec): min=81, max=15912, avg=134.65, stdev=143.11 00:10:01.732 clat percentiles (usec): 00:10:01.732 | 1.00th=[ 83], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 96], 00:10:01.732 | 30.00th=[ 105], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:10:01.732 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:10:01.732 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 186], 99.95th=[ 192], 00:10:01.732 | 99.99th=[ 273] 00:10:01.732 bw ( KiB/s): min=27272, max=38656, per=27.80%, avg=29558.40, stdev=5085.73, samples=5 00:10:01.732 iops : min= 6818, max= 9664, avg=7389.60, stdev=1271.43, samples=5 00:10:01.732 lat (usec) : 100=26.52%, 250=73.46%, 500=0.01% 00:10:01.732 cpu : usr=3.47%, sys=10.52%, ctx=20820, majf=0, minf=2 00:10:01.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.732 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.732 issued rwts: total=20817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.732 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4060926: Wed Nov 27 12:48:28 2024 00:10:01.732 read: IOPS=7174, BW=28.0MiB/s (29.4MB/s)(74.4MiB/2654msec) 00:10:01.732 slat (nsec): min=8412, max=39500, avg=9067.49, stdev=874.89 00:10:01.732 clat (usec): min=75, max=214, avg=127.54, stdev=13.20 00:10:01.732 lat (usec): min=83, max=223, avg=136.60, stdev=13.22 00:10:01.732 clat percentiles (usec): 00:10:01.732 | 1.00th=[ 96], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 120], 00:10:01.732 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:10:01.732 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 155], 00:10:01.732 | 99.00th=[ 169], 99.50th=[ 182], 99.90th=[ 200], 99.95th=[ 204], 00:10:01.732 | 99.99th=[ 212] 00:10:01.732 bw ( KiB/s): min=25968, max=29808, per=27.28%, avg=29011.20, stdev=1701.33, samples=5 00:10:01.732 iops : min= 6492, max= 7452, avg=7252.80, stdev=425.33, samples=5 00:10:01.732 lat (usec) : 100=1.28%, 250=98.72% 00:10:01.732 cpu : usr=3.28%, sys=10.40%, ctx=19041, majf=0, minf=2 00:10:01.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.732 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.732 issued rwts: total=19041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.732 00:10:01.732 Run status group 0 (all jobs): 00:10:01.732 READ: bw=104MiB/s (109MB/s), 28.0MiB/s-30.2MiB/s (29.4MB/s-31.7MB/s), io=340MiB (357MB), run=2654-3275msec 00:10:01.732 00:10:01.732 Disk stats (read/write): 00:10:01.732 nvme0n1: 
ios=19942/0, merge=0/0, ticks=2406/0, in_queue=2406, util=93.55% 00:10:01.732 nvme0n2: ios=23007/0, merge=0/0, ticks=2574/0, in_queue=2574, util=93.52% 00:10:01.732 nvme0n3: ios=20816/0, merge=0/0, ticks=2405/0, in_queue=2405, util=95.48% 00:10:01.732 nvme0n4: ios=18766/0, merge=0/0, ticks=2241/0, in_queue=2241, util=96.46% 00:10:01.992 12:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.992 12:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:02.252 12:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.252 12:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:02.512 12:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.512 12:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:02.772 12:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.772 12:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:02.772 12:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:02.772 12:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 4060761 00:10:02.772 12:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:02.772 12:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.711 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.711 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:03.711 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:03.711 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.711 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:03.711 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.711 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:03.711 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:03.711 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:03.711 nvmf hotplug test: fio failed as expected 00:10:03.711 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.970 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:03.970 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:03.970 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:03.970 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:03.970 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:03.970 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.970 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:03.970 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:03.970 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:03.970 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:03.970 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.971 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:03.971 rmmod nvme_rdma 00:10:03.971 rmmod nvme_fabrics 00:10:03.971 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.971 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:03.971 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:03.971 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 4057667 ']' 00:10:03.971 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 4057667 00:10:03.971 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 4057667 ']' 00:10:03.971 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 4057667 00:10:03.971 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:04.231 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.231 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4057667 00:10:04.231 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.231 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.231 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4057667' 00:10:04.231 killing process with pid 4057667 00:10:04.231 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 4057667 00:10:04.231 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 4057667 00:10:04.494 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.494 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:04.494 00:10:04.494 real 0m28.495s 
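
To recap the hotplug phase that just finished: target/fio.sh starts a 10-second verified read in the background, deletes the raid and malloc bdevs underneath it, and treats the resulting non-zero fio status (err=95, Operation not supported, on every namespace) as the pass condition, hence "fio failed as expected". Reduced to a sketch using the names from the log (this is an illustration, not the test script itself):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper \
        -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    scripts/rpc.py bdev_raid_delete concat0      # nvme0n4 reads begin to fail
    scripts/rpc.py bdev_raid_delete raid0        # then nvme0n3
    scripts/rpc.py bdev_malloc_delete Malloc0    # then nvme0n1
    scripts/rpc.py bdev_malloc_delete Malloc1    # then nvme0n2
    wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'
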
00:10:04.494 user 2m12.453s 00:10:04.494 sys 0m11.147s 00:10:04.494 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.494 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.494 ************************************ 00:10:04.494 END TEST nvmf_fio_target 00:10:04.494 ************************************ 00:10:04.494 12:48:30 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:04.494 12:48:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.494 12:48:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.494 12:48:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.494 ************************************ 00:10:04.494 START TEST nvmf_bdevio 00:10:04.494 ************************************ 00:10:04.494 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:04.494 * Looking for test storage... 00:10:04.494 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:04.494 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:04.494 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:04.494 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.819 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:04.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.820 --rc genhtml_branch_coverage=1 00:10:04.820 --rc genhtml_function_coverage=1 00:10:04.820 --rc genhtml_legend=1 00:10:04.820 --rc geninfo_all_blocks=1 00:10:04.820 --rc geninfo_unexecuted_blocks=1 00:10:04.820 00:10:04.820 ' 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:04.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.820 --rc genhtml_branch_coverage=1 00:10:04.820 --rc genhtml_function_coverage=1 00:10:04.820 --rc genhtml_legend=1 00:10:04.820 --rc geninfo_all_blocks=1 00:10:04.820 --rc geninfo_unexecuted_blocks=1 00:10:04.820 00:10:04.820 ' 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:04.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.820 --rc genhtml_branch_coverage=1 00:10:04.820 --rc genhtml_function_coverage=1 00:10:04.820 --rc genhtml_legend=1 00:10:04.820 --rc geninfo_all_blocks=1 00:10:04.820 --rc geninfo_unexecuted_blocks=1 00:10:04.820 00:10:04.820 ' 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:04.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.820 --rc genhtml_branch_coverage=1 00:10:04.820 --rc genhtml_function_coverage=1 00:10:04.820 --rc genhtml_legend=1 00:10:04.820 --rc geninfo_all_blocks=1 00:10:04.820 --rc geninfo_unexecuted_blocks=1 00:10:04.820 00:10:04.820 ' 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:04.820 12:48:30 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.820 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.820 12:48:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:13.076 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:13.076 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:13.076 12:48:39 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:13.076 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:13.076 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:13.076 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
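A minimal sketch (not verbatim from nvmf/common.sh) of the get_ip_address pipeline traced just above: `ip -o -4` prints one record per line with the CIDR address in field 4, awk selects that field, and cut drops the prefix length.

    get_ip_address() {
        local interface=$1
        # field 4 of the one-line output is e.g. 192.168.100.8/24; cut strips the /24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this test node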
00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:13.077 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:13.077 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:13.077 altname enp217s0f0np0 00:10:13.077 altname ens818f0np0 00:10:13.077 inet 192.168.100.8/24 scope global mlx_0_0 00:10:13.077 valid_lft forever preferred_lft forever 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:13.077 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:13.077 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:13.077 altname enp217s0f1np1 00:10:13.077 altname ens818f1np1 00:10:13.077 inet 192.168.100.9/24 scope global mlx_0_1 00:10:13.077 valid_lft forever preferred_lft forever 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:13.077 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
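A sketch of the get_rdma_if_list helper whose trace appears twice in this run: it intersects the net devices found under the Mellanox PCI functions with the RDMA-capable interfaces reported by rxe_cfg, and `continue 2` jumps to the next outer iteration on the first match so each interface is emitted at most once. The net_devs array is assumed to have been populated by the PCI scan earlier in this section.

    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)   # RDMA-capable netdevs, one per line
        for net_dev in "${net_devs[@]}"; do
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"
                    continue 2   # stop scanning the rxe list, move to next net_dev
                fi
            done
        done
    }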
00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:13.337 192.168.100.9' 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:13.337 192.168.100.9' 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:13.337 192.168.100.9' 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=4066194 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 4066194 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 4066194 ']' 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.337 12:48:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:13.337 [2024-11-27 12:48:39.599251] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:10:13.337 [2024-11-27 12:48:39.599302] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.337 [2024-11-27 12:48:39.688777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.597 [2024-11-27 12:48:39.729699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.598 [2024-11-27 12:48:39.729736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.598 [2024-11-27 12:48:39.729745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.598 [2024-11-27 12:48:39.729754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.598 [2024-11-27 12:48:39.729762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
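A short sketch of the address bookkeeping traced above: RDMA_IP_LIST is a newline-separated string, and the first and second target IPs are sliced out with head/tail exactly as the nvmf/common.sh@485-486 trace shows.

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9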
00:10:13.598 [2024-11-27 12:48:39.731596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:13.598 [2024-11-27 12:48:39.731706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:13.598 [2024-11-27 12:48:39.731817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.598 [2024-11-27 12:48:39.731818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:14.165 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.165 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:14.165 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:14.165 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:14.165 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.165 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.165 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:14.165 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.165 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.165 [2024-11-27 12:48:40.504197] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10506f0/0x1054be0) succeed. 00:10:14.165 [2024-11-27 12:48:40.513506] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1051d80/0x1096280) succeed. 
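A simplified sketch of the waitforlisten step traced above (pid 4066194). The real helper in autotest_common.sh polls the target's RPC socket via rpc.py; this minimal stand-in only checks that the process is alive and that the UNIX socket has appeared.

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0            # socket is up; rpc_cmd can proceed
            sleep 0.1
        done
        return 1                                      # timed out waiting for the socket
    }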
00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.425 Malloc0 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.425 [2024-11-27 12:48:40.700070] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:14.425 { 00:10:14.425 "params": { 00:10:14.425 "name": "Nvme$subsystem", 00:10:14.425 "trtype": "$TEST_TRANSPORT", 00:10:14.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.425 "adrfam": "ipv4", 00:10:14.425 "trsvcid": "$NVMF_PORT", 00:10:14.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.425 "hdgst": ${hdgst:-false}, 00:10:14.425 "ddgst": ${ddgst:-false} 00:10:14.425 }, 00:10:14.425 "method": "bdev_nvme_attach_controller" 00:10:14.425 } 00:10:14.425 EOF 00:10:14.425 )") 00:10:14.425 12:48:40 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:14.425 12:48:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:14.425 "params": { 00:10:14.425 "name": "Nvme1", 00:10:14.425 "trtype": "rdma", 00:10:14.425 "traddr": "192.168.100.8", 00:10:14.425 "adrfam": "ipv4", 00:10:14.425 "trsvcid": "4420", 00:10:14.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.425 "hdgst": false, 00:10:14.425 "ddgst": false 00:10:14.425 }, 00:10:14.425 "method": "bdev_nvme_attach_controller" 00:10:14.425 }' 00:10:14.425 [2024-11-27 12:48:40.753977] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:10:14.425 [2024-11-27 12:48:40.754026] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4066363 ] 00:10:14.685 [2024-11-27 12:48:40.845569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:14.685 [2024-11-27 12:48:40.888172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.685 [2024-11-27 12:48:40.888269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.685 [2024-11-27 12:48:40.888269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.685 I/O targets: 00:10:14.685 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:14.685 00:10:14.685 00:10:14.685 CUnit - A unit testing framework for C - Version 2.1-3 00:10:14.685 http://cunit.sourceforge.net/ 00:10:14.685 00:10:14.685 00:10:14.685 Suite: bdevio tests on: Nvme1n1 00:10:14.945 Test: blockdev write read block ...passed 00:10:14.945 Test: blockdev write zeroes read block ...passed 00:10:14.945 Test: blockdev write zeroes read no split ...passed 00:10:14.945 Test: blockdev write zeroes read split ...passed 00:10:14.945 Test: blockdev write zeroes read split partial ...passed 00:10:14.945 Test: blockdev reset ...[2024-11-27 12:48:41.095173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:14.945 [2024-11-27 12:48:41.117948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:10:14.945 [2024-11-27 12:48:41.144897] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:14.945 passed 00:10:14.945 Test: blockdev write read 8 blocks ...passed 00:10:14.945 Test: blockdev write read size > 128k ...passed 00:10:14.945 Test: blockdev write read invalid size ...passed 00:10:14.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:14.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:14.945 Test: blockdev write read max offset ...passed 00:10:14.945 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:14.945 Test: blockdev writev readv 8 blocks ...passed 00:10:14.945 Test: blockdev writev readv 30 x 1block ...passed 00:10:14.945 Test: blockdev writev readv block ...passed 00:10:14.945 Test: blockdev writev readv size > 128k ...passed 00:10:14.945 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:14.945 Test: blockdev comparev and writev ...[2024-11-27 12:48:41.147896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.945 [2024-11-27 12:48:41.147925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:14.945 [2024-11-27 12:48:41.147938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.945 [2024-11-27 12:48:41.147948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:14.945 [2024-11-27 12:48:41.148116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.945 [2024-11-27 12:48:41.148128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:14.945 [2024-11-27 12:48:41.148139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.945 [2024-11-27 12:48:41.148148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:14.945 [2024-11-27 12:48:41.148287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.945 [2024-11-27 12:48:41.148298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:14.945 [2024-11-27 12:48:41.148309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.945 [2024-11-27 12:48:41.148317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:14.945 [2024-11-27 12:48:41.148494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.945 [2024-11-27 12:48:41.148508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:14.945 [2024-11-27 12:48:41.148519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.945 [2024-11-27 12:48:41.148528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:14.945 passed 00:10:14.945 Test: blockdev nvme passthru rw ...passed 00:10:14.945 Test: blockdev nvme passthru vendor specific ...[2024-11-27 12:48:41.148804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:14.945 [2024-11-27 12:48:41.148817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:14.945 [2024-11-27 12:48:41.148858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:14.946 [2024-11-27 12:48:41.148870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:14.946 [2024-11-27 12:48:41.148915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:14.946 [2024-11-27 12:48:41.148925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:14.946 [2024-11-27 12:48:41.148972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:14.946 [2024-11-27 12:48:41.148983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:14.946 passed 00:10:14.946 Test: blockdev nvme admin passthru ...passed 00:10:14.946 Test: blockdev copy ...passed 00:10:14.946 00:10:14.946 Run Summary: Type Total Ran Passed Failed Inactive 00:10:14.946 suites 1 1 n/a 0 0 00:10:14.946 tests 23 23 23 0 0 00:10:14.946 asserts 152 152 152 0 n/a 00:10:14.946 00:10:14.946 Elapsed time = 0.172 seconds 00:10:14.946 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.946 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.946 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.946 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.946 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:14.946 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:14.946 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.946 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:15.205 rmmod nvme_rdma 00:10:15.205 rmmod nvme_fabrics 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.205 12:48:41 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 4066194 ']' 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 4066194 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 4066194 ']' 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 4066194 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4066194 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4066194' 00:10:15.205 killing process with pid 4066194 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 4066194 00:10:15.205 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 4066194 00:10:15.466 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.466 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:15.466 00:10:15.466 real 0m10.967s 00:10:15.466 user 0m11.323s 00:10:15.466 sys 0m7.244s 00:10:15.466 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.466 12:48:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.466 ************************************ 00:10:15.466 END TEST nvmf_bdevio 00:10:15.466 ************************************ 00:10:15.466 12:48:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:15.466 00:10:15.466 real 4m36.925s 00:10:15.466 user 11m15.152s 00:10:15.466 sys 1m52.538s 00:10:15.466 12:48:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.466 12:48:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.466 ************************************ 00:10:15.466 END TEST nvmf_target_core 00:10:15.466 ************************************ 00:10:15.466 12:48:41 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:15.466 12:48:41 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.466 12:48:41 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.466 12:48:41 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:15.466 ************************************ 00:10:15.466 START TEST nvmf_target_extra 00:10:15.466 ************************************ 00:10:15.466 12:48:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:15.727 * Looking for test storage... 00:10:15.727 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:15.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.727 --rc genhtml_branch_coverage=1 00:10:15.727 --rc genhtml_function_coverage=1 00:10:15.727 --rc genhtml_legend=1 00:10:15.727 --rc geninfo_all_blocks=1 00:10:15.727 --rc geninfo_unexecuted_blocks=1 00:10:15.727 00:10:15.727 ' 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:15.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.727 --rc genhtml_branch_coverage=1 00:10:15.727 --rc genhtml_function_coverage=1 00:10:15.727 --rc genhtml_legend=1 00:10:15.727 --rc geninfo_all_blocks=1 00:10:15.727 --rc geninfo_unexecuted_blocks=1 00:10:15.727 00:10:15.727 ' 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:15.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.727 --rc genhtml_branch_coverage=1 00:10:15.727 --rc genhtml_function_coverage=1 00:10:15.727 --rc genhtml_legend=1 00:10:15.727 --rc geninfo_all_blocks=1 00:10:15.727 --rc geninfo_unexecuted_blocks=1 00:10:15.727 00:10:15.727 ' 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:15.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.727 --rc genhtml_branch_coverage=1 00:10:15.727 --rc genhtml_function_coverage=1 00:10:15.727 --rc genhtml_legend=1 00:10:15.727 --rc geninfo_all_blocks=1 00:10:15.727 --rc geninfo_unexecuted_blocks=1 00:10:15.727 00:10:15.727 ' 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.727 12:48:41 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.727 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.728 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:15.728 ************************************ 00:10:15.728 START TEST nvmf_example 00:10:15.728 ************************************ 00:10:15.728 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:15.988 * Looking for test storage... 
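A simplified sketch of the run_test wrapper that frames every suite in this log (nvmf_bdevio above, nvmf_example here): it prints the START/END banners, times the suite body (producing the real/user/sys lines seen after nvmf_bdevio), and propagates the suite's exit status. The actual wrapper in autotest_common.sh does more bookkeeping.

    run_test() {
        local name=$1 rc
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # emits the real/user/sys summary on completion
        rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }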
00:10:15.988 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:15.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.988 --rc genhtml_branch_coverage=1 00:10:15.988 --rc genhtml_function_coverage=1 00:10:15.988 --rc genhtml_legend=1 00:10:15.988 --rc geninfo_all_blocks=1 00:10:15.988 --rc geninfo_unexecuted_blocks=1 00:10:15.988 00:10:15.988 ' 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:15.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.988 --rc genhtml_branch_coverage=1 00:10:15.988 --rc genhtml_function_coverage=1 00:10:15.988 --rc genhtml_legend=1 00:10:15.988 --rc geninfo_all_blocks=1 00:10:15.988 --rc geninfo_unexecuted_blocks=1 00:10:15.988 00:10:15.988 ' 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:15.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.988 --rc genhtml_branch_coverage=1 00:10:15.988 --rc genhtml_function_coverage=1 00:10:15.988 --rc genhtml_legend=1 00:10:15.988 --rc geninfo_all_blocks=1 00:10:15.988 --rc geninfo_unexecuted_blocks=1 00:10:15.988 00:10:15.988 ' 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:15.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.988 --rc genhtml_branch_coverage=1 00:10:15.988 --rc genhtml_function_coverage=1 00:10:15.988 --rc genhtml_legend=1 00:10:15.988 --rc geninfo_all_blocks=1 00:10:15.988 --rc geninfo_unexecuted_blocks=1 00:10:15.988 00:10:15.988 ' 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
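A simplified sketch of the version test now traced twice (`lt 1.15 2` for lcov): fields are split on '.', '-' and ':' (the IFS=.-: seen in the trace) and compared numerically left to right, so 1 < 2 in the first field selects the older-lcov option set. The real cmp_versions in scripts/common.sh handles more operators; this sketch covers only '<' and '>'.

    cmp_versions() {
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then
                [[ $op == '<' ]] && return 0 || return 1
            elif ((${ver1[v]:-0} > ${ver2[v]:-0})); then
                [[ $op == '>' ]] && return 0 || return 1
            fi
        done
        return 1   # equal versions satisfy neither strict comparison
    }
    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"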
00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.988 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.989 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
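A sketch of the host-identity setup just traced: the NQN comes from `nvme gen-hostnqn`, and judging by the two values in the trace the host ID is the UUID portion of that NQN. The ##*: expansion below is an illustrative way to derive it, not necessarily the exact expression in nvmf/common.sh.

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':', leaving the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")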
00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:15.989 12:48:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
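
The "eval '_remove_spdk_ns 15> /dev/null'" entry above is the per-command xtrace silencer: in these runs fd 15 is the shell's BASH_XTRACEFD, so redirecting it to /dev/null for a single command hides that command's trace without touching the rest of the log. A condensed sketch of the mechanism (the function body is inferred from the trace, not copied from autotest_common.sh):

    # Run one command with its xtrace output discarded; assumes BASH_XTRACEFD
    # has been pointed at a dedicated fd (15 in this log) beforehand.
    xtrace_disable_per_cmd() {
        eval "$* ${BASH_XTRACEFD}> /dev/null"
    }
    # Usage, as seen above:
    #     xtrace_disable_per_cmd _remove_spdk_ns
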
00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:24.200 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
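
The e810/x722/mlx arrays above are plain PCI vendor:device tables, and the "Found 0000:d9:00.0 (0x15b3 - 0x1015)" lines can be reproduced by hand: 0x15b3 is the Mellanox vendor id and 0x1015 the ConnectX-4 Lx device id. A sketch (the sample output is paraphrased, not captured from this host):

    # List the two ConnectX-4 Lx ports the trace just found, PCI domains included.
    lspci -D -d 15b3:1015
    # 0000:d9:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
    # 0000:d9:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
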
00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:24.200 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:24.200 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:24.201 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:24.201 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:24.201 12:48:50 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:24.201 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:24.201 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:24.201 altname enp217s0f0np0 00:10:24.201 altname ens818f0np0 00:10:24.201 inet 192.168.100.8/24 scope global mlx_0_0 00:10:24.201 valid_lft forever preferred_lft forever 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:24.201 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:24.201 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:24.201 altname enp217s0f1np1 00:10:24.201 altname ens818f1np1 00:10:24.201 inet 192.168.100.9/24 scope global mlx_0_1 00:10:24.201 valid_lft forever preferred_lft forever 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- 
# get_available_rdma_ips 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:24.201 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:24.461 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:24.462 12:48:50 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:24.462 192.168.100.9' 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:24.462 192.168.100.9' 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:24.462 192.168.100.9' 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4070731 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4070731 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 4070731 ']' 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
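
The head/tail juggling above simply peels the first and second addresses off the newline-separated RDMA_IP_LIST; the timestamps interleaved inside the quoted value are the logger stamping each line of the multi-line string, not part of the data. The same extraction in isolation:

    # One discovered RDMA IP per line, as gathered by get_available_rdma_ips.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9
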
00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.462 12:48:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.400 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.400 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:25.400 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:25.400 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:25.400 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.400 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:25.400 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.400 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:25.659 12:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:37.908 Initializing NVMe Controllers 00:10:37.908 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:37.908 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:37.908 Initialization complete. Launching workers. 00:10:37.908 ======================================================== 00:10:37.908 Latency(us) 00:10:37.908 Device Information : IOPS MiB/s Average min max 00:10:37.908 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25682.22 100.32 2492.12 637.36 15988.14 00:10:37.908 ======================================================== 00:10:37.908 Total : 25682.22 100.32 2492.12 637.36 15988.14 00:10:37.908 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:37.908 rmmod nvme_rdma 00:10:37.908 rmmod nvme_fabrics 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 4070731 ']' 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 4070731 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 4070731 ']' 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 4070731 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4070731 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:37.908 12:49:03 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4070731' 00:10:37.908 killing process with pid 4070731 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 4070731 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 4070731 00:10:37.908 nvmf threads initialize successfully 00:10:37.908 bdev subsystem init successfully 00:10:37.908 created a nvmf target service 00:10:37.908 create targets's poll groups done 00:10:37.908 all subsystems of target started 00:10:37.908 nvmf target is running 00:10:37.908 all subsystems of target stopped 00:10:37.908 destroy targets's poll groups done 00:10:37.908 destroyed the nvmf target service 00:10:37.908 bdev subsystem finish successfully 00:10:37.908 nvmf threads destroy successfully 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:37.908 00:10:37.908 real 0m21.450s 00:10:37.908 user 0m52.893s 00:10:37.908 sys 0m6.922s 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:37.908 ************************************ 00:10:37.908 END TEST nvmf_example 00:10:37.908 ************************************ 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:37.908 ************************************ 00:10:37.908 START TEST nvmf_filesystem 00:10:37.908 ************************************ 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:10:37.908 * Looking for test storage... 
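
The rpc_cmd sequence traced before the perf run above is the standard five-step target bring-up. Written as direct scripts/rpc.py calls against an already running target it would look like this (a sketch; the log drives the same RPCs through the rpc_cmd wrapper):

    # 1. Create the RDMA transport with the buffer counts from the trace.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # 2. Back the namespace with a 64 MiB, 512 B-block malloc bdev ("Malloc0").
    scripts/rpc.py bdev_malloc_create 64 512
    # 3. Create the subsystem; -a allows any host, -s sets the serial number.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # 4. Attach the malloc bdev as a namespace.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # 5. Listen on the first RDMA address discovered earlier.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
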
00:10:37.908 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:37.908 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:37.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.909 --rc genhtml_branch_coverage=1 00:10:37.909 --rc genhtml_function_coverage=1 00:10:37.909 --rc genhtml_legend=1 00:10:37.909 --rc geninfo_all_blocks=1 00:10:37.909 --rc geninfo_unexecuted_blocks=1 00:10:37.909 00:10:37.909 ' 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:37.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.909 --rc genhtml_branch_coverage=1 00:10:37.909 --rc genhtml_function_coverage=1 00:10:37.909 --rc genhtml_legend=1 00:10:37.909 --rc geninfo_all_blocks=1 00:10:37.909 --rc geninfo_unexecuted_blocks=1 00:10:37.909 00:10:37.909 ' 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:37.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.909 --rc genhtml_branch_coverage=1 00:10:37.909 --rc genhtml_function_coverage=1 00:10:37.909 --rc genhtml_legend=1 00:10:37.909 --rc geninfo_all_blocks=1 00:10:37.909 --rc geninfo_unexecuted_blocks=1 00:10:37.909 00:10:37.909 ' 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:37.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.909 --rc genhtml_branch_coverage=1 00:10:37.909 --rc genhtml_function_coverage=1 00:10:37.909 --rc genhtml_legend=1 00:10:37.909 --rc geninfo_all_blocks=1 00:10:37.909 --rc geninfo_unexecuted_blocks=1 00:10:37.909 00:10:37.909 ' 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:10:37.909 12:49:03 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
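
The ver1/ver2 machinery traced above (cmp_versions 1.15 '<' 2, used to pick lcov options) is a segment-by-segment numeric version comparison. Condensed to its essentials, and assuming purely numeric segments, it behaves like this sketch rather than the verbatim scripts/common.sh body:

    # Succeed when version $1 sorts strictly below version $2 (numeric segments only).
    lt() {
        local IFS=.-:                       # the same separators the trace splits on
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                            # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # prints: lcov predates 2.x
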
00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:37.909 12:49:03 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:37.909 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:37.910 12:49:03 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:37.910 #define SPDK_CONFIG_H 00:10:37.910 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:37.910 #define SPDK_CONFIG_APPS 1 00:10:37.910 #define SPDK_CONFIG_ARCH native 00:10:37.910 #undef SPDK_CONFIG_ASAN 00:10:37.910 #undef SPDK_CONFIG_AVAHI 00:10:37.910 #undef SPDK_CONFIG_CET 00:10:37.910 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:37.910 #define SPDK_CONFIG_COVERAGE 1 00:10:37.910 #define SPDK_CONFIG_CROSS_PREFIX 00:10:37.910 #undef SPDK_CONFIG_CRYPTO 00:10:37.910 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:37.910 #undef SPDK_CONFIG_CUSTOMOCF 00:10:37.910 #undef SPDK_CONFIG_DAOS 00:10:37.910 #define SPDK_CONFIG_DAOS_DIR 00:10:37.910 #define SPDK_CONFIG_DEBUG 1 00:10:37.910 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:37.910 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:10:37.910 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:37.910 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:37.910 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:37.910 #undef SPDK_CONFIG_DPDK_UADK 00:10:37.910 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:10:37.910 #define SPDK_CONFIG_EXAMPLES 1 00:10:37.910 #undef SPDK_CONFIG_FC 00:10:37.910 #define SPDK_CONFIG_FC_PATH 00:10:37.910 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:37.910 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:37.910 #define SPDK_CONFIG_FSDEV 1 00:10:37.910 #undef SPDK_CONFIG_FUSE 00:10:37.910 #undef SPDK_CONFIG_FUZZER 00:10:37.910 #define SPDK_CONFIG_FUZZER_LIB 00:10:37.910 #undef SPDK_CONFIG_GOLANG 00:10:37.910 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:37.910 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:37.910 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:37.910 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:37.910 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:37.910 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:37.910 #undef SPDK_CONFIG_HAVE_LZ4 00:10:37.910 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:37.910 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:37.910 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:37.910 #define SPDK_CONFIG_IDXD 1 00:10:37.910 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:37.910 #undef SPDK_CONFIG_IPSEC_MB 00:10:37.910 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:37.910 #define SPDK_CONFIG_ISAL 1 00:10:37.910 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:37.910 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:37.910 #define SPDK_CONFIG_LIBDIR 00:10:37.910 #undef SPDK_CONFIG_LTO 00:10:37.910 #define SPDK_CONFIG_MAX_LCORES 128 00:10:37.910 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:37.910 #define SPDK_CONFIG_NVME_CUSE 1 00:10:37.910 #undef SPDK_CONFIG_OCF 00:10:37.910 #define SPDK_CONFIG_OCF_PATH 00:10:37.910 #define SPDK_CONFIG_OPENSSL_PATH 00:10:37.910 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:37.910 #define SPDK_CONFIG_PGO_DIR 00:10:37.910 #undef SPDK_CONFIG_PGO_USE 00:10:37.910 #define SPDK_CONFIG_PREFIX /usr/local 00:10:37.910 #undef SPDK_CONFIG_RAID5F 00:10:37.910 #undef SPDK_CONFIG_RBD 00:10:37.910 #define SPDK_CONFIG_RDMA 1 00:10:37.910 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:37.910 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:37.910 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:37.910 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:37.910 #define SPDK_CONFIG_SHARED 1 00:10:37.910 #undef SPDK_CONFIG_SMA 00:10:37.910 
#define SPDK_CONFIG_TESTS 1 00:10:37.910 #undef SPDK_CONFIG_TSAN 00:10:37.910 #define SPDK_CONFIG_UBLK 1 00:10:37.910 #define SPDK_CONFIG_UBSAN 1 00:10:37.910 #undef SPDK_CONFIG_UNIT_TESTS 00:10:37.910 #undef SPDK_CONFIG_URING 00:10:37.910 #define SPDK_CONFIG_URING_PATH 00:10:37.910 #undef SPDK_CONFIG_URING_ZNS 00:10:37.910 #undef SPDK_CONFIG_USDT 00:10:37.910 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:37.910 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:37.910 #undef SPDK_CONFIG_VFIO_USER 00:10:37.910 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:37.910 #define SPDK_CONFIG_VHOST 1 00:10:37.910 #define SPDK_CONFIG_VIRTIO 1 00:10:37.910 #undef SPDK_CONFIG_VTUNE 00:10:37.910 #define SPDK_CONFIG_VTUNE_DIR 00:10:37.910 #define SPDK_CONFIG_WERROR 1 00:10:37.910 #define SPDK_CONFIG_WPDK_DIR 00:10:37.910 #undef SPDK_CONFIG_XNVME 00:10:37.910 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.910 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:37.911 12:49:03 
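Two things in the stretch above deserve a gloss. First, the wall of backslash-escaped characters in the applications.sh@23 test is only xtrace quoting: the script reads the generated config.h in one gulp and glob-matches it to detect a debug build, and the match succeeds here because the dump shows '#define SPDK_CONFIG_DEBUG 1'. A condensed reading of that test, not the literal script:

# Condensed form of the applications.sh@22-24 probe traced above.
config_h=/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h
if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    # Debug build detected; any follow-up is still gated by
    # SPDK_AUTOTEST_DEBUG_APPS, which the @24 line evaluates (0 in this run).
    :
fi

Second, the enormous PATH values are a side effect of /etc/opt/spdk-pkgdep/paths/export.sh prepending the same /opt/go, /opt/protoc and /opt/golangci directories every time it is sourced; lookup stops at the first match, so the growth is cosmetic.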
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:37.911 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:37.912 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 4072950 ]] 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 4072950 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.0zVpYU 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.0zVpYU/tests/target /tmp/spdk.0zVpYU 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=54908227584 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730586624 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6822359040 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:37.913 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30803623936 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865293312 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=61669376 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12322701312 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346118144 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23416832 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30864084992 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865293312 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1208320 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:37.914 12:49:03 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:37.914 * Looking for test storage... 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=54908227584 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9036951552 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:37.914 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:37.914 12:49:03 
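The set_test_storage walk above (invoked as set_test_storage 2147483648 and padded to requested_size=2214592512, i.e. 2 GiB plus 64 MiB of slack) snapshots 'df -T' into per-mount associative arrays and then checks the candidate directories for room. A self-contained sketch of that parsing; the -B1 flag is an assumption, added to make the byte-sized values in the trace explicit:

#!/usr/bin/env bash
# Snapshot the mount table the way the traced loop does; field order matches
# the 'read -r source fs size use avail _ mount' lines above.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    uses["$mount"]=$use
    avails["$mount"]=$avail
done < <(df -T -B1 | grep -v Filesystem)

# Resolve the mount behind a candidate directory, then check free space,
# as the @385-@391 lines do.
target_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
requested_size=2214592512
(( avails[$mount] >= requested_size )) && echo "using $target_dir on $mount"

The logged numbers are self-consistent: avails[/] (54908227584) plus uses[/] (6822359040) equals sizes[/] (61730586624); new_size is uses[/] + requested_size = 6822359040 + 2214592512 = 9036951552; and the @395 guard computes new_size * 100 / sizes[/] = 14 in integer math, well under the 95 percent ceiling, so the directory is accepted and exported as SPDK_TEST_STORAGE.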
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:37.914 12:49:03 
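The autotest_common.sh@1698-@18 block above is what makes this log readable at all: errtrace propagates the ERR trap into functions, extdebug exposes the call-stack arrays a backtrace needs, the trap prints one backtrace and then disarms itself, and PS4 prefixes every xtrace line with the time (\t), the test domain, and the source trimmed to its last two path components plus line number, which is exactly the '12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- file@line --' format seen throughout. A condensed, runnable sketch; the print_backtrace stub and the xtrace.log target are stand-ins, since the real run writes to an fd 15 opened elsewhere:

# Minimal stand-in for SPDK's richer print_backtrace helper.
print_backtrace() {
    local i
    for (( i = 1; i < ${#FUNCNAME[@]}; i++ )); do
        echo "  at ${FUNCNAME[i]} (${BASH_SOURCE[i]}:${BASH_LINENO[i-1]})"
    done
}
set -o errtrace    # propagate the ERR trap into functions and subshells
shopt -s extdebug  # expose BASH_ARGC/BASH_ARGV for argument-aware backtraces
trap 'trap - ERR; print_backtrace >&2' ERR
PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
exec 15>>xtrace.log   # assumption: fd 15 is already open in the real harness
BASH_XTRACEFD=15
set -x

The lcov version check that starts just above resumes below; it is unpacked once it completes.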
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:37.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.914 --rc genhtml_branch_coverage=1 00:10:37.914 --rc genhtml_function_coverage=1 00:10:37.914 --rc genhtml_legend=1 00:10:37.914 --rc geninfo_all_blocks=1 00:10:37.914 --rc geninfo_unexecuted_blocks=1 00:10:37.914 00:10:37.914 ' 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:37.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.914 --rc genhtml_branch_coverage=1 00:10:37.914 --rc genhtml_function_coverage=1 00:10:37.914 --rc genhtml_legend=1 00:10:37.914 --rc geninfo_all_blocks=1 00:10:37.914 --rc geninfo_unexecuted_blocks=1 00:10:37.914 00:10:37.914 ' 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:37.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.914 --rc genhtml_branch_coverage=1 00:10:37.914 --rc genhtml_function_coverage=1 00:10:37.914 --rc genhtml_legend=1 00:10:37.914 --rc geninfo_all_blocks=1 00:10:37.914 --rc geninfo_unexecuted_blocks=1 00:10:37.914 00:10:37.914 ' 00:10:37.914 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:37.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.914 --rc genhtml_branch_coverage=1 00:10:37.914 --rc genhtml_function_coverage=1 00:10:37.914 --rc genhtml_legend=1 00:10:37.914 --rc geninfo_all_blocks=1 00:10:37.914 --rc geninfo_unexecuted_blocks=1 00:10:37.914 00:10:37.915 ' 00:10:37.915 
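That completes cmp_versions: the awk '{print $NF}' probe reports lcov 1.15, both version strings are split on dots, dashes and colons via IFS=.-:, and the field-by-field walk finds 1 < 2 immediately, so lt 1.15 2 succeeds and the pre-2.0 coverage flags (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) land in LCOV_OPTS and LCOV. A standalone condensation of the lt/cmp_versions pair (numeric fields assumed; version_lt is an illustrative name, not the script's):

# Succeed iff version $1 is strictly older than version $2.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # older
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1   # newer
    done
    return 1   # equal is not "less than"
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov flags"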
12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.915 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:37.915 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.915 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.915 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.915 12:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.915 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.915 12:49:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:46.054 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:46.054 
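[editor's note] The "[: : integer expression expected" message a few entries up is a real shell error, not test output: common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq needs an integer on both sides, so an unset variable trips it. The script tolerates the failure, but a guarded form would avoid the noise. A minimal sketch (the variable name is a placeholder, not the one common.sh actually checks):

    # '-eq' requires integers; rule out the empty/unset case first.
    if [[ -n ${SPDK_TEST_FLAG:-} && $SPDK_TEST_FLAG -eq 1 ]]; then
        echo "flag enabled"
    fi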
12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:46.054 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.054 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:46.055 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:46.055 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:46.055 12:49:12 
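[editor's note] The discovery above reduces to: match PCI vendor/device IDs against a table of supported NICs, then take whatever netdev sysfs exposes under each hit, exactly the /sys/bus/pci/devices/$pci/net glob in the trace. A hand-rolled equivalent of the pci_bus_cache lookups (a sketch; the harness builds its cache elsewhere):

    mellanox=0x15b3                        # per common.sh@313 above
    for pci in /sys/bus/pci/devices/*; do
        # 0x1015 (ConnectX-4 Lx) is the device ID the 'Found' lines above report
        if [[ $(cat "$pci/vendor") == "$mellanox" && $(cat "$pci/device") == 0x1015 ]]; then
            for net in "$pci"/net/*; do
                echo "Found ${pci##*/}: ${net##*/}"    # e.g. 0000:d9:00.0: mlx_0_0
            done
        fi
    done
    # rdma_device_init then loads the verbs/CM stack, the same modprobe
    # sequence traced above:
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done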
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:46.055 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:46.055 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:46.055 altname enp217s0f0np0 00:10:46.055 altname ens818f0np0 00:10:46.055 inet 192.168.100.8/24 scope global mlx_0_0 00:10:46.055 valid_lft forever preferred_lft forever 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:46.055 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:46.055 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:46.055 altname enp217s0f1np1 00:10:46.055 altname ens818f1np1 00:10:46.055 inet 192.168.100.9/24 scope global mlx_0_1 00:10:46.055 valid_lft forever preferred_lft forever 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:46.055 12:49:12 
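[editor's note] allocate_nic_ips walks the RDMA-capable interfaces and hands each one an address from the NVMF_IP_PREFIX block, counting up from NVMF_IP_LEAST_ADDR, which is why mlx_0_0 and mlx_0_1 show up above as 192.168.100.8 and 192.168.100.9. Roughly (a sketch: the assignment step itself is inferred, only the variables and the resulting addresses appear in the trace):

    count=$NVMF_IP_LEAST_ADDR            # 8, per common.sh@13
    for nic in $(get_rdma_if_list); do   # mlx_0_0 mlx_0_1
        ip addr add "$NVMF_IP_PREFIX.$count/24" dev "$nic"
        ((count++))
    done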
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:46.055 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:46.056 192.168.100.9' 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:46.056 192.168.100.9' 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:46.056 192.168.100.9' 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.056 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.315 ************************************ 00:10:46.315 START TEST nvmf_filesystem_no_in_capsule 00:10:46.315 ************************************ 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.315 12:49:12 
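[editor's note] The two target addresses are peeled off the newline-separated RDMA_IP_LIST with the head/tail pipelines traced above; spelled out:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9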
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=4077104 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 4077104 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 4077104 ']' 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.315 12:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.315 [2024-11-27 12:49:12.525000] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:10:46.315 [2024-11-27 12:49:12.525050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.315 [2024-11-27 12:49:12.616989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.315 [2024-11-27 12:49:12.655831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.315 [2024-11-27 12:49:12.655878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.315 [2024-11-27 12:49:12.655887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.315 [2024-11-27 12:49:12.655895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.315 [2024-11-27 12:49:12.655902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
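[editor's note] nvmfappstart backgrounds the target and blocks until its RPC socket answers: -i sets the shared-memory ID, -e the tracepoint group mask, -m the reactor core mask (0xF, hence the four reactors started below). The launch-and-wait pattern as a sketch (the polling loop stands in for the real waitforlisten helper):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # target not listening yet
    done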
00:10:46.315 [2024-11-27 12:49:12.657475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.315 [2024-11-27 12:49:12.657573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.315 [2024-11-27 12:49:12.657635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.315 [2024-11-27 12:49:12.657653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.252 [2024-11-27 12:49:13.421247] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:47.252 [2024-11-27 12:49:13.443195] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e6adf0/0x1e6f2e0) succeed. 00:10:47.252 [2024-11-27 12:49:13.452501] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e6c480/0x1eb0980) succeed. 
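[editor's note] The nvmf_create_transport call above asks for zero bytes of in-capsule data (-c 0), and the rdma.c warning shows the driver clamping that to 256, the minimum needed to support msdbd=16. As a plain RPC invocation the transport setup is:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0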
00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.252 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.510 Malloc1 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.510 [2024-11-27 12:49:13.706456] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:47.510 12:49:13 
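[editor's note] filesystem.sh@53-@56 above build the export path: a 512 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and an RDMA listener on the first target IP. As plain RPC calls, plus the initiator attach the trace performs a few entries later:

    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # initiator side, with the host NQN/ID generated earlier by 'nvme gen-hostnqn':
    nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420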
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.510 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:47.510 { 00:10:47.510 "name": "Malloc1", 00:10:47.510 "aliases": [ 00:10:47.510 "327ed44d-d2b5-4d2e-a605-788bd1561fae" 00:10:47.510 ], 00:10:47.510 "product_name": "Malloc disk", 00:10:47.510 "block_size": 512, 00:10:47.510 "num_blocks": 1048576, 00:10:47.510 "uuid": "327ed44d-d2b5-4d2e-a605-788bd1561fae", 00:10:47.510 "assigned_rate_limits": { 00:10:47.511 "rw_ios_per_sec": 0, 00:10:47.511 "rw_mbytes_per_sec": 0, 00:10:47.511 "r_mbytes_per_sec": 0, 00:10:47.511 "w_mbytes_per_sec": 0 00:10:47.511 }, 00:10:47.511 "claimed": true, 00:10:47.511 "claim_type": "exclusive_write", 00:10:47.511 "zoned": false, 00:10:47.511 "supported_io_types": { 00:10:47.511 "read": true, 00:10:47.511 "write": true, 00:10:47.511 "unmap": true, 00:10:47.511 "flush": true, 00:10:47.511 "reset": true, 00:10:47.511 "nvme_admin": false, 00:10:47.511 "nvme_io": false, 00:10:47.511 "nvme_io_md": false, 00:10:47.511 "write_zeroes": true, 00:10:47.511 "zcopy": true, 00:10:47.511 "get_zone_info": false, 00:10:47.511 "zone_management": false, 00:10:47.511 "zone_append": false, 00:10:47.511 "compare": false, 00:10:47.511 "compare_and_write": false, 00:10:47.511 "abort": true, 00:10:47.511 "seek_hole": false, 00:10:47.511 "seek_data": false, 00:10:47.511 "copy": true, 00:10:47.511 "nvme_iov_md": false 00:10:47.511 }, 00:10:47.511 "memory_domains": [ 00:10:47.511 { 00:10:47.511 "dma_device_id": "system", 00:10:47.511 "dma_device_type": 1 00:10:47.511 }, 00:10:47.511 { 00:10:47.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.511 "dma_device_type": 2 00:10:47.511 } 00:10:47.511 ], 00:10:47.511 "driver_specific": {} 00:10:47.511 } 00:10:47.511 ]' 00:10:47.511 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:47.511 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:47.511 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:47.511 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:47.511 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:47.511 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:47.511 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:10:47.511 12:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:48.448 12:49:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:48.448 12:49:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:48.448 12:49:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.448 12:49:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:48.448 12:49:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:50.983 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:50.984 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:50.984 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:50.984 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:10:50.984 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:50.984 12:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:50.984 12:49:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.921 ************************************ 00:10:51.921 START TEST filesystem_ext4 00:10:51.921 ************************************ 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:51.921 mke2fs 1.47.0 (5-Feb-2023) 00:10:51.921 Discarding device blocks: 0/522240 done 00:10:51.921 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:51.921 Filesystem UUID: 6d533b49-e900-4af1-a773-3a076c8f446e 00:10:51.921 Superblock backups stored on 
blocks: 00:10:51.921 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:51.921 00:10:51.921 Allocating group tables: 0/64 done 00:10:51.921 Writing inode tables: 0/64 done 00:10:51.921 Creating journal (8192 blocks): done 00:10:51.921 Writing superblocks and filesystem accounting information: 0/64 done 00:10:51.921 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:51.921 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4077104 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.181 00:10:52.181 real 0m0.204s 00:10:52.181 user 0m0.032s 00:10:52.181 sys 0m0.078s 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:52.181 ************************************ 00:10:52.181 END TEST filesystem_ext4 00:10:52.181 ************************************ 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:10:52.181 ************************************ 00:10:52.181 START TEST filesystem_btrfs 00:10:52.181 ************************************ 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:52.181 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:52.441 btrfs-progs v6.8.1 00:10:52.441 See https://btrfs.readthedocs.io for more information. 00:10:52.441 00:10:52.441 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:52.441 NOTE: several default settings have changed in version 5.15, please make sure 00:10:52.441 this does not affect your deployments: 00:10:52.441 - DUP for metadata (-m dup) 00:10:52.441 - enabled no-holes (-O no-holes) 00:10:52.441 - enabled free-space-tree (-R free-space-tree) 00:10:52.441 00:10:52.441 Label: (null) 00:10:52.441 UUID: b166d5b0-05e0-4cfd-8338-030325298e6d 00:10:52.441 Node size: 16384 00:10:52.441 Sector size: 4096 (CPU page size: 4096) 00:10:52.441 Filesystem size: 510.00MiB 00:10:52.441 Block group profiles: 00:10:52.441 Data: single 8.00MiB 00:10:52.441 Metadata: DUP 32.00MiB 00:10:52.441 System: DUP 8.00MiB 00:10:52.441 SSD detected: yes 00:10:52.441 Zoned device: no 00:10:52.441 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:52.441 Checksum: crc32c 00:10:52.441 Number of devices: 1 00:10:52.441 Devices: 00:10:52.441 ID SIZE PATH 00:10:52.441 1 510.00MiB /dev/nvme0n1p1 00:10:52.441 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4077104 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.441 00:10:52.441 real 0m0.255s 00:10:52.441 user 0m0.035s 00:10:52.441 sys 0m0.127s 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:52.441 ************************************ 00:10:52.441 END TEST filesystem_btrfs 
00:10:52.441 ************************************ 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.441 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.701 ************************************ 00:10:52.701 START TEST filesystem_xfs 00:10:52.701 ************************************ 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:52.701 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:52.701 = sectsz=512 attr=2, projid32bit=1 00:10:52.701 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:52.701 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:52.701 data = bsize=4096 blocks=130560, imaxpct=25 00:10:52.701 = sunit=0 swidth=0 blks 00:10:52.701 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:52.701 log =internal log bsize=4096 blocks=16384, version=2 00:10:52.701 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:52.701 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:52.701 Discarding blocks...Done. 
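[editor's note] Each filesystem gets the same smoke cycle after mkfs; it already ran for ext4 and btrfs above and repeats for XFS below: mount, create and delete a file with syncs in between, unmount, then confirm the target process and the exported partition both survived. Spelled out from the filesystem.sh@23-@43 entries in the trace:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                       # target still alive? (pid 4077104 here)
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible?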
00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:52.701 12:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.701 12:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4077104 00:10:52.701 12:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.701 12:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.701 12:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.701 12:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.701 00:10:52.701 real 0m0.212s 00:10:52.701 user 0m0.038s 00:10:52.701 sys 0m0.075s 00:10:52.701 12:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.701 12:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:52.701 ************************************ 00:10:52.701 END TEST filesystem_xfs 00:10:52.701 ************************************ 00:10:52.701 12:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:52.960 12:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:52.960 12:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:53.898 12:49:20 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4077104 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4077104 ']' 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4077104 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077104 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4077104' 00:10:53.898 killing process with pid 4077104 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 4077104 00:10:53.898 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 4077104 00:10:54.157 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:54.157 00:10:54.157 real 0m8.074s 00:10:54.157 user 0m31.668s 00:10:54.157 sys 0m1.248s 00:10:54.157 12:49:20 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.157 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.157 ************************************ 00:10:54.157 END TEST nvmf_filesystem_no_in_capsule 00:10:54.157 ************************************ 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.416 ************************************ 00:10:54.416 START TEST nvmf_filesystem_in_capsule 00:10:54.416 ************************************ 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=4078659 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 4078659 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 4078659 ']' 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
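At this point nvmfappstart has launched nvmf_tgt (pid 4078659) and waitforlisten is polling until the RPC socket answers. A hedged sketch of that wait loop, using the locals visible in the trace (rpc_addr, max_retries=100); the rpc.py probe shown here is an assumption about how liveness is tested, not a line copied from autotest_common.sh.

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- > 0 )); do
        # bail out early if the target died instead of listening
        kill -0 "$pid" 2>/dev/null || return 1
        # assumed probe: any RPC that only succeeds once the socket is up
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}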
00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.416 12:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.416 [2024-11-27 12:49:20.662746] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:10:54.416 [2024-11-27 12:49:20.662790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.416 [2024-11-27 12:49:20.751289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.416 [2024-11-27 12:49:20.791242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.416 [2024-11-27 12:49:20.791282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.416 [2024-11-27 12:49:20.791291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.416 [2024-11-27 12:49:20.791300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.416 [2024-11-27 12:49:20.791306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.416 [2024-11-27 12:49:20.792892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.416 [2024-11-27 12:49:20.792992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.416 [2024-11-27 12:49:20.793054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.416 [2024-11-27 12:49:20.793055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.353 [2024-11-27 12:49:21.583784] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1741df0/0x17462e0) 
succeed. 00:10:55.353 [2024-11-27 12:49:21.592988] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1743480/0x1787980) succeed. 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.353 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.613 Malloc1 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.613 [2024-11-27 12:49:21.871723] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 
00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:55.613 { 00:10:55.613 "name": "Malloc1", 00:10:55.613 "aliases": [ 00:10:55.613 "cad20d8a-f864-47f7-b819-31b658e87b08" 00:10:55.613 ], 00:10:55.613 "product_name": "Malloc disk", 00:10:55.613 "block_size": 512, 00:10:55.613 "num_blocks": 1048576, 00:10:55.613 "uuid": "cad20d8a-f864-47f7-b819-31b658e87b08", 00:10:55.613 "assigned_rate_limits": { 00:10:55.613 "rw_ios_per_sec": 0, 00:10:55.613 "rw_mbytes_per_sec": 0, 00:10:55.613 "r_mbytes_per_sec": 0, 00:10:55.613 "w_mbytes_per_sec": 0 00:10:55.613 }, 00:10:55.613 "claimed": true, 00:10:55.613 "claim_type": "exclusive_write", 00:10:55.613 "zoned": false, 00:10:55.613 "supported_io_types": { 00:10:55.613 "read": true, 00:10:55.613 "write": true, 00:10:55.613 "unmap": true, 00:10:55.613 "flush": true, 00:10:55.613 "reset": true, 00:10:55.613 "nvme_admin": false, 00:10:55.613 "nvme_io": false, 00:10:55.613 "nvme_io_md": false, 00:10:55.613 "write_zeroes": true, 00:10:55.613 "zcopy": true, 00:10:55.613 "get_zone_info": false, 00:10:55.613 "zone_management": false, 00:10:55.613 "zone_append": false, 00:10:55.613 "compare": false, 00:10:55.613 "compare_and_write": false, 00:10:55.613 "abort": true, 00:10:55.613 "seek_hole": false, 00:10:55.613 "seek_data": false, 00:10:55.613 "copy": true, 00:10:55.613 "nvme_iov_md": false 00:10:55.613 }, 00:10:55.613 "memory_domains": [ 00:10:55.613 { 00:10:55.613 "dma_device_id": "system", 00:10:55.613 "dma_device_type": 1 00:10:55.613 }, 00:10:55.613 { 00:10:55.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.613 "dma_device_type": 2 00:10:55.613 } 00:10:55.613 ], 00:10:55.613 "driver_specific": {} 00:10:55.613 } 00:10:55.613 ]' 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
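The block just traced is pure bookkeeping: get_bdev_size pulls block_size and num_blocks out of bdev_get_bdevs with jq, reports the size in MiB, and filesystem.sh converts it back to bytes. Reduced to the arithmetic, with the expressions inferred from the traced values rather than copied from the scripts:

bs=512                                      # jq '.[] .block_size' on Malloc1
nb=1048576                                  # jq '.[] .num_blocks' on Malloc1
bdev_size=$(( bs * nb / 1024 / 1024 ))      # 512 MiB, the value echoed above
malloc_size=$(( bdev_size * 1024 * 1024 ))  # 536870912 bytes
echo "$malloc_size"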
00:10:55.613 12:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:56.991 12:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.991 12:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:56.991 12:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.991 12:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:56.991 12:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:58.895 12:49:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:58.895 12:49:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:58.895 12:49:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:58.895 12:49:25 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:58.895 12:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.274 ************************************ 00:11:00.274 START TEST filesystem_in_capsule_ext4 00:11:00.274 ************************************ 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:00.274 mke2fs 1.47.0 (5-Feb-2023) 00:11:00.274 Discarding device blocks: 0/522240 done 00:11:00.274 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:00.274 Filesystem UUID: c3c0a4d6-85c7-4e6f-a593-12aeb2388c08 00:11:00.274 
Superblock backups stored on blocks: 00:11:00.274 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:00.274 00:11:00.274 Allocating group tables: 0/64 done 00:11:00.274 Writing inode tables: 0/64 done 00:11:00.274 Creating journal (8192 blocks): done 00:11:00.274 Writing superblocks and filesystem accounting information: 0/64 done 00:11:00.274 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4078659 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:00.274 00:11:00.274 real 0m0.199s 00:11:00.274 user 0m0.031s 00:11:00.274 sys 0m0.078s 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:00.274 ************************************ 00:11:00.274 END TEST filesystem_in_capsule_ext4 00:11:00.274 ************************************ 00:11:00.274 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:00.275 12:49:26 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.275 ************************************ 00:11:00.275 START TEST filesystem_in_capsule_btrfs 00:11:00.275 ************************************ 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:00.275 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:00.535 btrfs-progs v6.8.1 00:11:00.535 See https://btrfs.readthedocs.io for more information. 00:11:00.535 00:11:00.535 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:00.535 NOTE: several default settings have changed in version 5.15, please make sure 00:11:00.535 this does not affect your deployments: 00:11:00.535 - DUP for metadata (-m dup) 00:11:00.535 - enabled no-holes (-O no-holes) 00:11:00.535 - enabled free-space-tree (-R free-space-tree) 00:11:00.535 00:11:00.535 Label: (null) 00:11:00.535 UUID: d257e104-bae7-4f46-9539-5dca75a219cb 00:11:00.535 Node size: 16384 00:11:00.535 Sector size: 4096 (CPU page size: 4096) 00:11:00.535 Filesystem size: 510.00MiB 00:11:00.535 Block group profiles: 00:11:00.535 Data: single 8.00MiB 00:11:00.535 Metadata: DUP 32.00MiB 00:11:00.535 System: DUP 8.00MiB 00:11:00.535 SSD detected: yes 00:11:00.535 Zoned device: no 00:11:00.535 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:00.535 Checksum: crc32c 00:11:00.535 Number of devices: 1 00:11:00.535 Devices: 00:11:00.535 ID SIZE PATH 00:11:00.535 1 510.00MiB /dev/nvme0n1p1 00:11:00.535 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4078659 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:00.535 00:11:00.535 real 0m0.254s 00:11:00.535 user 0m0.034s 00:11:00.535 sys 0m0.129s 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.535 ************************************ 00:11:00.535 END TEST filesystem_in_capsule_btrfs 00:11:00.535 ************************************ 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.535 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.795 ************************************ 00:11:00.795 START TEST filesystem_in_capsule_xfs 00:11:00.795 ************************************ 00:11:00.795 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:00.795 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:00.795 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:00.795 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:00.795 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:00.795 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:00.795 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:00.795 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:00.795 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:00.795 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:00.795 12:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:00.795 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:00.795 = sectsz=512 attr=2, projid32bit=1 00:11:00.795 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:00.795 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:00.795 data = bsize=4096 blocks=130560, imaxpct=25 00:11:00.795 = sunit=0 swidth=0 blks 00:11:00.795 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:00.795 log =internal log bsize=4096 blocks=16384, version=2 00:11:00.795 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:00.795 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:00.795 Discarding blocks...Done. 
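With the XFS filesystem created, the entries that follow repeat the smoke test already seen in the no_in_capsule pass: mount the partition over NVMe-oF, create and delete a file, unmount, and confirm the target survived. Condensed into plain shell (paths and pid as traced; the grep checks verify the namespace and partition are still exposed):

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 4078659                              # nvmf_tgt must still be running
lsblk -l -o NAME | grep -q -w nvme0n1        # namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition still visible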
00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4078659 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:00.795 00:11:00.795 real 0m0.208s 00:11:00.795 user 0m0.024s 00:11:00.795 sys 0m0.085s 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.795 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:00.795 ************************************ 00:11:00.795 END TEST filesystem_in_capsule_xfs 00:11:00.795 ************************************ 00:11:01.054 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:01.054 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:01.054 12:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.991 12:49:28 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4078659 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4078659 ']' 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4078659 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078659 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078659' 00:11:01.991 killing process with pid 4078659 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 4078659 00:11:01.991 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 4078659 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:02.561 00:11:02.561 real 0m8.070s 
00:11:02.561 user 0m31.606s 00:11:02.561 sys 0m1.233s 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.561 ************************************ 00:11:02.561 END TEST nvmf_filesystem_in_capsule 00:11:02.561 ************************************ 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:02.561 rmmod nvme_rdma 00:11:02.561 rmmod nvme_fabrics 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:02.561 00:11:02.561 real 0m25.200s 00:11:02.561 user 1m5.967s 00:11:02.561 sys 0m9.132s 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.561 ************************************ 00:11:02.561 END TEST nvmf_filesystem 00:11:02.561 ************************************ 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.561 ************************************ 00:11:02.561 START TEST nvmf_target_discovery 00:11:02.561 ************************************ 00:11:02.561 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:02.561 * Looking for test storage... 
00:11:02.821 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:02.821 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:02.821 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:02.821 12:49:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:02.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.821 --rc genhtml_branch_coverage=1 00:11:02.821 --rc genhtml_function_coverage=1 00:11:02.821 --rc genhtml_legend=1 00:11:02.821 --rc geninfo_all_blocks=1 00:11:02.821 --rc geninfo_unexecuted_blocks=1 00:11:02.821 00:11:02.821 ' 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:02.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.821 --rc genhtml_branch_coverage=1 00:11:02.821 --rc genhtml_function_coverage=1 00:11:02.821 --rc genhtml_legend=1 00:11:02.821 --rc geninfo_all_blocks=1 00:11:02.821 --rc geninfo_unexecuted_blocks=1 00:11:02.821 00:11:02.821 ' 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:02.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.821 --rc genhtml_branch_coverage=1 00:11:02.821 --rc genhtml_function_coverage=1 00:11:02.821 --rc genhtml_legend=1 00:11:02.821 --rc geninfo_all_blocks=1 00:11:02.821 --rc geninfo_unexecuted_blocks=1 00:11:02.821 00:11:02.821 ' 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:02.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.821 --rc genhtml_branch_coverage=1 00:11:02.821 --rc genhtml_function_coverage=1 00:11:02.821 --rc genhtml_legend=1 00:11:02.821 --rc geninfo_all_blocks=1 00:11:02.821 --rc geninfo_unexecuted_blocks=1 00:11:02.821 00:11:02.821 ' 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.821 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.822 12:49:29 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.822 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:02.822 12:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.947 12:49:36 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.947 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:10.948 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:10.948 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:10.948 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.948 12:49:36 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:10.948 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:10.948 12:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:10.948 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
00:11:10.948 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:10.948 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:10.949 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:10.949 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:10.949 altname enp217s0f0np0 00:11:10.949 altname ens818f0np0 00:11:10.949 inet 192.168.100.8/24 scope global mlx_0_0 00:11:10.949 valid_lft forever preferred_lft forever 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:10.949 12:49:37 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:10.949 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:10.949 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:10.949 altname enp217s0f1np1 00:11:10.949 altname ens818f1np1 00:11:10.949 inet 192.168.100.9/24 scope global mlx_0_1 00:11:10.949 valid_lft forever preferred_lft forever 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:10.949 192.168.100.9' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:10.949 192.168.100.9' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:10.949 192.168.100.9' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.949 12:49:37 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=4084363 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 4084363 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 4084363 ']' 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:10.949 12:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.950 [2024-11-27 12:49:37.213474] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:11:10.950 [2024-11-27 12:49:37.213524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.950 [2024-11-27 12:49:37.303195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.209 [2024-11-27 12:49:37.344772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.209 [2024-11-27 12:49:37.344810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.209 [2024-11-27 12:49:37.344820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.209 [2024-11-27 12:49:37.344828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.209 [2024-11-27 12:49:37.344834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:11.209 [2024-11-27 12:49:37.346592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.209 [2024-11-27 12:49:37.346692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.209 [2024-11-27 12:49:37.346712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.209 [2024-11-27 12:49:37.346714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.777 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.777 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:11.777 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.777 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.777 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.777 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.777 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:11.777 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.777 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.777 [2024-11-27 12:49:38.128808] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf78df0/0xf7d2e0) succeed. 00:11:11.777 [2024-11-27 12:49:38.138268] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf7a480/0xfbe980) succeed. 
00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.037 Null1 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.037 [2024-11-27 12:49:38.318980] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.037 Null2 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:12.037 12:49:38 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.037 Null3 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.037 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.038 12:49:38 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.038 Null4 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.038 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.298 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:11:12.298 00:11:12.298 Discovery Log Number of Records 6, Generation counter 6 00:11:12.298 =====Discovery Log Entry 0====== 00:11:12.298 trtype: rdma 00:11:12.298 adrfam: ipv4 00:11:12.298 subtype: current discovery subsystem 00:11:12.298 treq: not required 00:11:12.298 portid: 0 00:11:12.298 trsvcid: 4420 00:11:12.298 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:12.298 traddr: 192.168.100.8 00:11:12.298 eflags: explicit discovery connections, duplicate discovery information 00:11:12.298 rdma_prtype: not specified 00:11:12.298 rdma_qptype: connected 00:11:12.298 rdma_cms: rdma-cm 00:11:12.298 rdma_pkey: 0x0000 00:11:12.298 =====Discovery Log Entry 1====== 00:11:12.298 trtype: rdma 00:11:12.298 adrfam: ipv4 00:11:12.298 subtype: nvme subsystem 00:11:12.298 treq: not required 00:11:12.298 portid: 0 00:11:12.298 trsvcid: 4420 00:11:12.298 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:12.298 traddr: 192.168.100.8 00:11:12.298 eflags: none 00:11:12.298 rdma_prtype: not specified 00:11:12.298 rdma_qptype: connected 00:11:12.298 rdma_cms: rdma-cm 00:11:12.298 rdma_pkey: 0x0000 00:11:12.298 =====Discovery Log Entry 2====== 00:11:12.298 trtype: rdma 00:11:12.298 adrfam: ipv4 00:11:12.298 subtype: nvme subsystem 00:11:12.298 treq: not required 00:11:12.298 portid: 0 00:11:12.298 trsvcid: 4420 00:11:12.298 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:12.298 traddr: 192.168.100.8 00:11:12.298 eflags: none 00:11:12.298 rdma_prtype: not specified 00:11:12.298 rdma_qptype: connected 00:11:12.298 rdma_cms: rdma-cm 00:11:12.298 rdma_pkey: 0x0000 00:11:12.298 =====Discovery Log Entry 3====== 00:11:12.298 trtype: rdma 00:11:12.298 adrfam: ipv4 00:11:12.298 subtype: nvme subsystem 00:11:12.298 treq: not required 00:11:12.298 portid: 0 00:11:12.298 trsvcid: 4420 00:11:12.298 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:12.298 traddr: 192.168.100.8 00:11:12.298 eflags: none 00:11:12.298 rdma_prtype: not specified 00:11:12.298 rdma_qptype: connected 00:11:12.298 rdma_cms: rdma-cm 00:11:12.298 rdma_pkey: 0x0000 00:11:12.298 =====Discovery Log Entry 4====== 00:11:12.298 trtype: rdma 00:11:12.298 adrfam: ipv4 00:11:12.298 subtype: nvme subsystem 00:11:12.298 treq: not required 00:11:12.299 portid: 0 00:11:12.299 trsvcid: 4420 00:11:12.299 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:12.299 traddr: 192.168.100.8 00:11:12.299 eflags: none 00:11:12.299 rdma_prtype: not specified 00:11:12.299 rdma_qptype: connected 00:11:12.299 rdma_cms: rdma-cm 00:11:12.299 rdma_pkey: 0x0000 00:11:12.299 =====Discovery Log Entry 5====== 00:11:12.299 trtype: rdma 00:11:12.299 adrfam: ipv4 00:11:12.299 subtype: discovery subsystem referral 00:11:12.299 treq: not required 00:11:12.299 portid: 0 00:11:12.299 trsvcid: 4430 00:11:12.299 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:12.299 traddr: 192.168.100.8 00:11:12.299 eflags: none 00:11:12.299 rdma_prtype: unrecognized 00:11:12.299 rdma_qptype: unrecognized 00:11:12.299 rdma_cms: unrecognized 00:11:12.299 rdma_pkey: 0x0000 00:11:12.299 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:12.299 Perform nvmf subsystem discovery via RPC 00:11:12.299 12:49:38 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:12.299 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.299 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.299 [ 00:11:12.299 { 00:11:12.299 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:12.299 "subtype": "Discovery", 00:11:12.299 "listen_addresses": [ 00:11:12.299 { 00:11:12.299 "trtype": "RDMA", 00:11:12.299 "adrfam": "IPv4", 00:11:12.299 "traddr": "192.168.100.8", 00:11:12.299 "trsvcid": "4420" 00:11:12.299 } 00:11:12.299 ], 00:11:12.299 "allow_any_host": true, 00:11:12.299 "hosts": [] 00:11:12.299 }, 00:11:12.299 { 00:11:12.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:12.299 "subtype": "NVMe", 00:11:12.299 "listen_addresses": [ 00:11:12.299 { 00:11:12.299 "trtype": "RDMA", 00:11:12.299 "adrfam": "IPv4", 00:11:12.299 "traddr": "192.168.100.8", 00:11:12.299 "trsvcid": "4420" 00:11:12.299 } 00:11:12.299 ], 00:11:12.299 "allow_any_host": true, 00:11:12.299 "hosts": [], 00:11:12.299 "serial_number": "SPDK00000000000001", 00:11:12.299 "model_number": "SPDK bdev Controller", 00:11:12.299 "max_namespaces": 32, 00:11:12.299 "min_cntlid": 1, 00:11:12.299 "max_cntlid": 65519, 00:11:12.299 "namespaces": [ 00:11:12.299 { 00:11:12.299 "nsid": 1, 00:11:12.299 "bdev_name": "Null1", 00:11:12.299 "name": "Null1", 00:11:12.299 "nguid": "0CD6D35E43224326BAFECE9288657058", 00:11:12.299 "uuid": "0cd6d35e-4322-4326-bafe-ce9288657058" 00:11:12.299 } 00:11:12.299 ] 00:11:12.299 }, 00:11:12.299 { 00:11:12.299 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:12.299 "subtype": "NVMe", 00:11:12.299 "listen_addresses": [ 00:11:12.299 { 00:11:12.299 "trtype": "RDMA", 00:11:12.299 "adrfam": "IPv4", 00:11:12.299 "traddr": "192.168.100.8", 00:11:12.299 "trsvcid": "4420" 00:11:12.299 } 00:11:12.299 ], 00:11:12.299 "allow_any_host": true, 00:11:12.299 "hosts": [], 00:11:12.299 "serial_number": "SPDK00000000000002", 00:11:12.299 "model_number": "SPDK bdev Controller", 00:11:12.299 "max_namespaces": 32, 00:11:12.299 "min_cntlid": 1, 00:11:12.299 "max_cntlid": 65519, 00:11:12.299 "namespaces": [ 00:11:12.299 { 00:11:12.299 "nsid": 1, 00:11:12.299 "bdev_name": "Null2", 00:11:12.299 "name": "Null2", 00:11:12.299 "nguid": "F916F90FAB1C4468AA54B24A8E86AF60", 00:11:12.299 "uuid": "f916f90f-ab1c-4468-aa54-b24a8e86af60" 00:11:12.299 } 00:11:12.299 ] 00:11:12.299 }, 00:11:12.299 { 00:11:12.299 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:12.299 "subtype": "NVMe", 00:11:12.299 "listen_addresses": [ 00:11:12.299 { 00:11:12.299 "trtype": "RDMA", 00:11:12.299 "adrfam": "IPv4", 00:11:12.299 "traddr": "192.168.100.8", 00:11:12.299 "trsvcid": "4420" 00:11:12.299 } 00:11:12.299 ], 00:11:12.299 "allow_any_host": true, 00:11:12.299 "hosts": [], 00:11:12.299 "serial_number": "SPDK00000000000003", 00:11:12.299 "model_number": "SPDK bdev Controller", 00:11:12.299 "max_namespaces": 32, 00:11:12.299 "min_cntlid": 1, 00:11:12.299 "max_cntlid": 65519, 00:11:12.299 "namespaces": [ 00:11:12.299 { 00:11:12.299 "nsid": 1, 00:11:12.299 "bdev_name": "Null3", 00:11:12.299 "name": "Null3", 00:11:12.299 "nguid": "686F40B1E2004056AB6A7C32AE58FC03", 00:11:12.299 "uuid": "686f40b1-e200-4056-ab6a-7c32ae58fc03" 00:11:12.299 } 00:11:12.299 ] 00:11:12.299 }, 00:11:12.299 { 00:11:12.299 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:12.299 "subtype": "NVMe", 00:11:12.299 "listen_addresses": [ 00:11:12.299 { 00:11:12.299 
"trtype": "RDMA", 00:11:12.299 "adrfam": "IPv4", 00:11:12.299 "traddr": "192.168.100.8", 00:11:12.299 "trsvcid": "4420" 00:11:12.299 } 00:11:12.299 ], 00:11:12.299 "allow_any_host": true, 00:11:12.299 "hosts": [], 00:11:12.299 "serial_number": "SPDK00000000000004", 00:11:12.299 "model_number": "SPDK bdev Controller", 00:11:12.299 "max_namespaces": 32, 00:11:12.299 "min_cntlid": 1, 00:11:12.299 "max_cntlid": 65519, 00:11:12.299 "namespaces": [ 00:11:12.299 { 00:11:12.299 "nsid": 1, 00:11:12.299 "bdev_name": "Null4", 00:11:12.299 "name": "Null4", 00:11:12.299 "nguid": "2892567A50ED4EB2B93C3444D3018701", 00:11:12.299 "uuid": "2892567a-50ed-4eb2-b93c-3444d3018701" 00:11:12.299 } 00:11:12.299 ] 00:11:12.299 } 00:11:12.299 ] 00:11:12.299 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.299 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:12.300 
12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.300 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:12.559 12:49:38 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.559 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:12.559 rmmod nvme_rdma 00:11:12.559 rmmod nvme_fabrics 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 4084363 ']' 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 4084363 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 4084363 ']' 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 4084363 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4084363 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4084363' 00:11:12.560 killing process with pid 4084363 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 4084363 00:11:12.560 12:49:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 4084363 00:11:12.819 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.819 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:12.819 00:11:12.819 real 0m10.238s 00:11:12.819 user 0m9.270s 00:11:12.819 sys 0m6.695s 00:11:12.819 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.819 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.819 ************************************ 00:11:12.819 END TEST 
nvmf_target_discovery 00:11:12.819 ************************************ 00:11:12.819 12:49:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:12.819 12:49:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.819 12:49:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.819 12:49:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.819 ************************************ 00:11:12.819 START TEST nvmf_referrals 00:11:12.819 ************************************ 00:11:12.819 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:13.079 * Looking for test storage... 00:11:13.079 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:13.079 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:13.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.080 --rc genhtml_branch_coverage=1 00:11:13.080 --rc genhtml_function_coverage=1 00:11:13.080 --rc genhtml_legend=1 00:11:13.080 --rc geninfo_all_blocks=1 00:11:13.080 --rc geninfo_unexecuted_blocks=1 00:11:13.080 00:11:13.080 ' 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:13.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.080 --rc genhtml_branch_coverage=1 00:11:13.080 --rc genhtml_function_coverage=1 00:11:13.080 --rc genhtml_legend=1 00:11:13.080 --rc geninfo_all_blocks=1 00:11:13.080 --rc geninfo_unexecuted_blocks=1 00:11:13.080 00:11:13.080 ' 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:13.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.080 --rc genhtml_branch_coverage=1 00:11:13.080 --rc genhtml_function_coverage=1 00:11:13.080 --rc genhtml_legend=1 00:11:13.080 --rc geninfo_all_blocks=1 00:11:13.080 --rc geninfo_unexecuted_blocks=1 00:11:13.080 00:11:13.080 ' 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:13.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.080 --rc genhtml_branch_coverage=1 00:11:13.080 --rc genhtml_function_coverage=1 00:11:13.080 --rc genhtml_legend=1 00:11:13.080 --rc geninfo_all_blocks=1 00:11:13.080 --rc geninfo_unexecuted_blocks=1 00:11:13.080 00:11:13.080 ' 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.080 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:13.080 12:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:21.207 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:21.207 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:21.208 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:21.208 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:21.208 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
[[ rdma == tcp ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:21.208 12:49:47 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:21.208 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:21.468 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:21.468 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:21.468 altname enp217s0f0np0 00:11:21.468 altname ens818f0np0 00:11:21.468 inet 192.168.100.8/24 scope global mlx_0_0 00:11:21.468 valid_lft forever preferred_lft forever 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:21.468 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:21.469 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:21.469 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:21.469 altname enp217s0f1np1 00:11:21.469 altname ens818f1np1 00:11:21.469 inet 192.168.100.9/24 scope global mlx_0_1 00:11:21.469 valid_lft forever preferred_lft forever 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:21.469 12:49:47 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:21.469 192.168.100.9' 
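[Editor's note] Everything from the PCI scan down to the two address blocks above is interface plumbing: find the netdevs that sit under each Mellanox PCI function, then read their IPv4 addresses. A minimal sketch of those two steps, using this rig's bus address and the exact ip/awk/cut pipeline from the trace:

    # Print the first IPv4 address bound to an interface, as get_ip_address does above.
    get_ip_address() {
        local interface=$1
        # "ip -o -4 addr show" emits one line per address; field 4 is ADDR/PREFIX.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    # Map a ConnectX PCI function to its netdev via sysfs (bus address is this rig's).
    pci_net_devs=("/sys/bus/pci/devices/0000:d9:00.0/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path -> mlx_0_0
    get_ip_address "${pci_net_devs[0]}"       # -> 192.168.100.8 here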
00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:21.469 192.168.100.9' 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:21.469 192.168.100.9' 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=4088715 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 4088715 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 4088715 ']' 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.469 12:49:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.469 [2024-11-27 12:49:47.804863] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
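[Editor's note] The common.sh@485/@486 entries above split the newline-separated RDMA_IP_LIST into the two target addresses with nothing but head and tail. That step in isolation:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'    # as gathered from the two mlx ports
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9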
00:11:21.469 [2024-11-27 12:49:47.804916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.728 [2024-11-27 12:49:47.896088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.728 [2024-11-27 12:49:47.936966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.728 [2024-11-27 12:49:47.937019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.728 [2024-11-27 12:49:47.937029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.728 [2024-11-27 12:49:47.937038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.729 [2024-11-27 12:49:47.937045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.729 [2024-11-27 12:49:47.938656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.729 [2024-11-27 12:49:47.938750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.729 [2024-11-27 12:49:47.938762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.729 [2024-11-27 12:49:47.938765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.297 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.297 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:22.297 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:22.297 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.297 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.556 [2024-11-27 12:49:48.720012] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xce4df0/0xce92e0) succeed. 00:11:22.556 [2024-11-27 12:49:48.729139] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xce6480/0xd2a980) succeed. 
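[Editor's note] Condensed, the bring-up recorded in the entries above is three steps: launch the target, wait for its RPC socket, create the RDMA transport. A sketch under the same flags, with paths relative to the SPDK checkout; the polling loop is only a stand-in for autotest's waitforlisten helper, not its actual implementation:

    # Start the target on cores 0-3 (-m 0xF) with every tracepoint group enabled
    # (-e 0xFFFF) and shared-memory id 0 (-i 0), as in the nvmfappstart entry.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll until /var/tmp/spdk.sock answers an RPC.
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # Claim the mlx5 ports for NVMe-oF; both create_ib_device notices above
    # come out of this call.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192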
00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.556 [2024-11-27 12:49:48.863876] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.556 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:22.815 12:49:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:22.815 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:23.073 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:23.074 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:11:23.331 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:11:23.332 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:11:23.332 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.332 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:23.332 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:11:23.591 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.591 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem'
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]]
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:11:23.592 12:49:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 ))
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]]
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:23.851 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:11:24.110 rmmod nvme_rdma
00:11:24.110 rmmod nvme_fabrics
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 4088715 ']'
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 4088715
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 4088715 ']'
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 4088715
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4088715
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4088715'
00:11:24.110 killing process with pid 4088715
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 4088715
00:11:24.110 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 4088715
00:11:24.369 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:24.369 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:11:24.369
00:11:24.369 real 0m11.421s
00:11:24.369 user 0m13.739s
00:11:24.369 sys 0m7.312s
00:11:24.369 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:24.369 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:24.369 ************************************
00:11:24.369 END TEST nvmf_referrals
00:11:24.369 ************************************
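nvmftestfini's teardown, as traced above, is short: unload the host-side fabrics modules (inside a set +e / set -e bracket so a busy module can be retried up to 20 times) and kill the target process. A minimal sketch, with the pid hard-coded to this run's value:

  modprobe -v -r nvme-rdma      # the rmmod lines above are its verbose output
  modprobe -v -r nvme-fabrics   # usually a no-op once the line above succeeds
  kill 4088715 && wait 4088715  # reactor_0, the nvmf_tgt app from this run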
00:11:24.369 12:49:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma
00:11:24.369 12:49:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:24.369 12:49:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:24.369 12:49:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:24.369 ************************************
00:11:24.369 START TEST nvmf_connect_disconnect
00:11:24.369 ************************************
00:11:24.369 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma
00:11:24.628 * Looking for test storage...
00:11:24.628 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.628 --rc genhtml_branch_coverage=1
00:11:24.628 --rc genhtml_function_coverage=1
00:11:24.628 --rc genhtml_legend=1
00:11:24.628 --rc geninfo_all_blocks=1
00:11:24.628 --rc geninfo_unexecuted_blocks=1
00:11:24.628
00:11:24.628 '
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.628 --rc genhtml_branch_coverage=1
00:11:24.628 --rc genhtml_function_coverage=1
00:11:24.628 --rc genhtml_legend=1
00:11:24.628 --rc geninfo_all_blocks=1
00:11:24.628 --rc geninfo_unexecuted_blocks=1
00:11:24.628
00:11:24.628 '
00:11:24.628 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:11:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.628 --rc genhtml_branch_coverage=1
00:11:24.628 --rc genhtml_function_coverage=1
00:11:24.628 --rc genhtml_legend=1
00:11:24.628 --rc geninfo_all_blocks=1
00:11:24.628 --rc geninfo_unexecuted_blocks=1
00:11:24.628
00:11:24.628 '
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:11:24.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.629 --rc genhtml_branch_coverage=1
00:11:24.629 --rc genhtml_function_coverage=1
00:11:24.629 --rc genhtml_legend=1
00:11:24.629 --rc geninfo_all_blocks=1
00:11:24.629 --rc geninfo_unexecuted_blocks=1
00:11:24.629
00:11:24.629 '
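The lt/cmp_versions exchange above splits each version string on dots, dashes, and colons and compares component-wise (here deciding that lcov 1.15 predates 2). The same idea in a few lines (illustrative, not the harness's exact code):

  lt() {                              # "is $1 older than $2?"
    local -a a b; local i
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                          # equal is not "less than"
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov"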
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
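The triplicated /opt/... prefixes in the PATH values above are an artifact of paths/export.sh prepending the same tool directories each time it is sourced. Harmless for the tests, but if it ever needed trimming, a dedup pass like this (illustrative, not part of the harness) keeps the first occurrence of each component:

  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
  PATH=${PATH%:}   # drop the trailing colon awk's ORS leaves behind
  export PATH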
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:24.629 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
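The "[: : integer expression expected" message above is the test builtin choking on '[' '' -eq 1 ']': the variable under test expands empty, so -eq sees no integer. The usual guard is a default expansion (VAR is a stand-in here, not the script's actual variable name):

  [ "${VAR:-0}" -eq 1 ] && echo "flag set"   # empty/unset collapses to 0

The trace shows the script tolerates the failed test (it proceeds to '[' -n '' ']' and have_pci_nics=0), so the message is cosmetic in this run.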
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:11:24.629 12:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=()
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=()
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:11:32.776 Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:11:32.776 Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:11:32.776 Found net devices under 0000:d9:00.0: mlx_0_0
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:11:32.776 Found net devices under 0000:d9:00.1: mlx_0_1
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:11:32.776 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm
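rdma_device_init's module stage, condensed from the modprobe calls traced above (same modules, same order):

  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"
  done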
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:11:32.777 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:32.777 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:11:32.777 altname enp217s0f0np0
00:11:32.777 altname ens818f0np0
00:11:32.777 inet 192.168.100.8/24 scope global mlx_0_0
00:11:32.777 valid_lft forever preferred_lft forever
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:11:32.777 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:32.777 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:11:32.777 altname enp217s0f1np1
00:11:32.777 altname ens818f1np1
00:11:32.777 inet 192.168.100.9/24 scope global mlx_0_1
00:11:32.777 valid_lft forever preferred_lft forever
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:11:32.777 192.168.100.9'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:11:32.777 192.168.100.9'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:11:32.777 192.168.100.9'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:11:32.777 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma
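get_ip_address, exercised twice above for each NIC, is just an ip/awk/cut pipeline; the resulting RDMA_IP_LIST is then split with head/tail to yield the first and second target IPs. Both steps, condensed:

  get_ip_address() {                  # first IPv4 address on an interface
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  ips=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
  first=$(echo "$ips" | head -n 1)                 # 192.168.100.8 on this rig
  second=$(echo "$ips" | tail -n +2 | head -n 1)   # 192.168.100.9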
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=4093375
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 4093375
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 4093375 ']'
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:32.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:32.778 12:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:32.778 [2024-11-27 12:49:59.006024] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:11:32.778 [2024-11-27 12:49:59.006073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:32.778 [2024-11-27 12:49:59.094962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:32.778 [2024-11-27 12:49:59.135340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:32.778 [2024-11-27 12:49:59.135381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:32.778 [2024-11-27 12:49:59.135391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:32.778 [2024-11-27 12:49:59.135399] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:32.778 [2024-11-27 12:49:59.135406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:32.778 [2024-11-27 12:49:59.136996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:32.778 [2024-11-27 12:49:59.137088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:32.778 [2024-11-27 12:49:59.137174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:32.778 [2024-11-27 12:49:59.137176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:33.714 12:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:33.714 12:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0
00:11:33.714 12:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:33.714 12:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:33.714 12:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:33.714 12:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:33.714 12:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
00:11:33.714 12:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.714 12:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:33.714 [2024-11-27 12:49:59.876784] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16
00:11:33.714 [2024-11-27 12:49:59.898508] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xeb9df0/0xebe2e0) succeed.
00:11:33.714 [2024-11-27 12:49:59.907799] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xebb480/0xeff980) succeed.
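Target bring-up so far, reduced to the commands the trace actually ran: launch nvmf_tgt, wait for its RPC socket, then create the RDMA transport. The poll loop is a rough stand-in for the harness's waitforlisten helper:

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # wait until the RPC socket answers (waitforlisten caps this at 100 retries)
  until $spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  $spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0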
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:33.714 [2024-11-27 12:50:00.054545] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']'
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5
00:11:33.714 12:50:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:11:37.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:42.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:46.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:50.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:53.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
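The provisioning steps and the five connect/disconnect rounds above, as plain rpc.py / nvme-cli calls (reusing $spdk from the previous sketch; the sleep is a crude stand-in for the harness's wait-for-device logic):

  rpc="$spdk/scripts/rpc.py"
  $rpc bdev_malloc_create 64 512                      # -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  for i in $(seq 1 5); do
    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e
    sleep 1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # -> "disconnected 1 controller(s)"
  done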
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:53.805 rmmod nvme_rdma 00:11:53.805 rmmod nvme_fabrics 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 4093375 ']' 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 4093375 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 4093375 ']' 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 4093375 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.805 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4093375 00:11:54.064 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.064 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.064 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4093375' 00:11:54.065 killing process with pid 4093375 00:11:54.065 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 4093375 00:11:54.065 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 4093375 00:11:54.065 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.065 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:54.324 00:11:54.324 real 0m29.786s 00:11:54.324 user 1m27.029s 00:11:54.324 sys 0m7.318s 00:11:54.324 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.324 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.324 
************************************ 00:11:54.324 END TEST nvmf_connect_disconnect 00:11:54.324 ************************************ 00:11:54.324 12:50:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:54.324 12:50:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.324 12:50:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.324 12:50:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:54.324 ************************************ 00:11:54.324 START TEST nvmf_multitarget 00:11:54.324 ************************************ 00:11:54.324 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:54.324 * Looking for test storage... 00:11:54.324 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:54.324 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:54.324 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:54.324 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:54.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.585 --rc genhtml_branch_coverage=1 00:11:54.585 --rc genhtml_function_coverage=1 00:11:54.585 --rc genhtml_legend=1 00:11:54.585 --rc geninfo_all_blocks=1 00:11:54.585 --rc geninfo_unexecuted_blocks=1 00:11:54.585 00:11:54.585 ' 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:54.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.585 --rc genhtml_branch_coverage=1 00:11:54.585 --rc genhtml_function_coverage=1 00:11:54.585 --rc genhtml_legend=1 00:11:54.585 --rc geninfo_all_blocks=1 00:11:54.585 --rc geninfo_unexecuted_blocks=1 00:11:54.585 00:11:54.585 ' 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:54.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.585 --rc genhtml_branch_coverage=1 00:11:54.585 --rc genhtml_function_coverage=1 00:11:54.585 --rc genhtml_legend=1 00:11:54.585 --rc geninfo_all_blocks=1 00:11:54.585 --rc geninfo_unexecuted_blocks=1 00:11:54.585 00:11:54.585 ' 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:54.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.585 --rc genhtml_branch_coverage=1 00:11:54.585 --rc genhtml_function_coverage=1 00:11:54.585 --rc genhtml_legend=1 00:11:54.585 --rc geninfo_all_blocks=1 00:11:54.585 --rc geninfo_unexecuted_blocks=1 00:11:54.585 00:11:54.585 ' 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.585 12:50:20 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.585 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.586 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:54.586 12:50:20 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:54.586 12:50:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:02.709 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:02.709 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:02.709 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.709 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:02.710 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:02.710 12:50:28 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:02.710 12:50:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:02.710 12:50:29 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:02.710 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:02.710 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:02.710 altname enp217s0f0np0 00:12:02.710 altname ens818f0np0 00:12:02.710 inet 192.168.100.8/24 scope global mlx_0_0 00:12:02.710 valid_lft forever preferred_lft forever 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:02.710 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:02.710 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:02.710 altname enp217s0f1np1 00:12:02.710 altname ens818f1np1 00:12:02.710 inet 192.168.100.9/24 scope global mlx_0_1 00:12:02.710 valid_lft forever preferred_lft forever 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:02.710 12:50:29 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:02.710 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:02.973 192.168.100.9' 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 
-- # echo '192.168.100.8 00:12:02.973 192.168.100.9' 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:02.973 192.168.100.9' 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=4101123 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 4101123 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 4101123 ']' 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.973 12:50:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.973 [2024-11-27 12:50:29.220200] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
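The head/tail pipeline just traced is how nvmftestinit turns the two-line RDMA_IP_LIST into NVMF_FIRST_TARGET_IP (192.168.100.8) and NVMF_SECOND_TARGET_IP (192.168.100.9). A minimal standalone sketch of that idiom, with the interface names hard-coded here for illustration (common.sh discovers them via get_rdma_if_list):

    ips=""
    for ifc in mlx_0_0 mlx_0_1; do                        # assumed names
        ips+="$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)"$'\n'
    done
    NVMF_FIRST_TARGET_IP=$(echo "$ips" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$ips" | tail -n +2 | head -n 1)

With both addresses in hand, the script sets NVMF_TRANSPORT_OPTS to '-t rdma --num-shared-buffers 1024' and modprobes nvme-rdma, after which nvmfappstart launches the multitarget app with -m 0xF, as the surrounding entries show.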
00:12:02.973 [2024-11-27 12:50:29.220261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.973 [2024-11-27 12:50:29.309357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.973 [2024-11-27 12:50:29.348519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.973 [2024-11-27 12:50:29.348562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.973 [2024-11-27 12:50:29.348571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.973 [2024-11-27 12:50:29.348580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.973 [2024-11-27 12:50:29.348587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.973 [2024-11-27 12:50:29.350251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.973 [2024-11-27 12:50:29.350346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.973 [2024-11-27 12:50:29.350408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.973 [2024-11-27 12:50:29.350410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.913 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.913 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:03.913 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:03.913 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:03.913 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:03.913 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.913 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:03.913 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:03.913 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:03.913 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:03.913 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:04.172 "nvmf_tgt_1" 00:12:04.172 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:04.172 "nvmf_tgt_2" 00:12:04.172 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:04.172 
12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:04.431 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:04.431 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:04.431 true 00:12:04.431 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:04.431 true 00:12:04.431 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:04.431 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:04.691 rmmod nvme_rdma 00:12:04.691 rmmod nvme_fabrics 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 4101123 ']' 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 4101123 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 4101123 ']' 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 4101123 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4101123 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4101123' 00:12:04.691 killing process with pid 4101123 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 4101123 00:12:04.691 12:50:30 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 4101123 00:12:04.950 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:04.950 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:04.950 00:12:04.950 real 0m10.611s 00:12:04.950 user 0m10.470s 00:12:04.950 sys 0m6.944s 00:12:04.950 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.950 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:04.951 ************************************ 00:12:04.951 END TEST nvmf_multitarget 00:12:04.951 ************************************ 00:12:04.951 12:50:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:04.951 12:50:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.951 12:50:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.951 12:50:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.951 ************************************ 00:12:04.951 START TEST nvmf_rpc 00:12:04.951 ************************************ 00:12:04.951 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:04.951 * Looking for test storage... 
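The nvmf_multitarget pass that just closed (real 0m10.611s) is a pure RPC exercise against multitarget_rpc.py: assert the default target count, add two named targets, assert the count grew, delete them, and assert the count fell back. Roughly, under the same path assumptions as above:

    rpc_py=./test/nvmf/target/multitarget_rpc.py          # assumed path
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + nvmf_tgt_1 + nvmf_tgt_2
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default

The nvmf_rpc test that starts next sources the same common.sh, which is why the test-storage probe and the PATH export trace repeat in the entries below.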
00:12:04.951 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:04.951 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:04.951 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:04.951 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:05.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.211 --rc genhtml_branch_coverage=1 00:12:05.211 --rc genhtml_function_coverage=1 00:12:05.211 --rc genhtml_legend=1 00:12:05.211 --rc geninfo_all_blocks=1 00:12:05.211 --rc geninfo_unexecuted_blocks=1 00:12:05.211 00:12:05.211 ' 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:05.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.211 --rc genhtml_branch_coverage=1 00:12:05.211 --rc genhtml_function_coverage=1 00:12:05.211 --rc genhtml_legend=1 00:12:05.211 --rc geninfo_all_blocks=1 00:12:05.211 --rc geninfo_unexecuted_blocks=1 00:12:05.211 00:12:05.211 ' 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:05.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.211 --rc genhtml_branch_coverage=1 00:12:05.211 --rc genhtml_function_coverage=1 00:12:05.211 --rc genhtml_legend=1 00:12:05.211 --rc geninfo_all_blocks=1 00:12:05.211 --rc geninfo_unexecuted_blocks=1 00:12:05.211 00:12:05.211 ' 00:12:05.211 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:05.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.211 --rc genhtml_branch_coverage=1 00:12:05.211 --rc genhtml_function_coverage=1 00:12:05.211 --rc genhtml_legend=1 00:12:05.212 --rc geninfo_all_blocks=1 00:12:05.212 --rc geninfo_unexecuted_blocks=1 00:12:05.212 00:12:05.212 ' 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.212 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:05.212 12:50:31 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.212 12:50:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.336 12:50:39 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:13.336 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:13.336 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:13.336 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:13.337 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:13.337 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:13.337 12:50:39 
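Everything from gather_supported_nvmf_pci_devs down to the modprobe run above is setup plumbing: enumerate the mlx5 PCI functions, then pull in the kernel RDMA stack. The module-loading half reduces to a fixed modprobe sequence; a minimal sketch, with the module list exactly as printed in this run:

    # Kernel modules load_ib_rdma_modules pulls in before any RDMA test work.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
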
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:13.337 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:13.337 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:13.337 altname enp217s0f0np0 00:12:13.337 altname ens818f0np0 00:12:13.337 inet 192.168.100.8/24 scope global mlx_0_0 00:12:13.337 valid_lft forever preferred_lft forever 00:12:13.337 
12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:13.337 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:13.337 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:13.337 altname enp217s0f1np1 00:12:13.337 altname ens818f1np1 00:12:13.337 inet 192.168.100.9/24 scope global mlx_0_1 00:12:13.337 valid_lft forever preferred_lft forever 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
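The get_ip_address calls above (in allocate_nic_ips and again in get_available_rdma_ips) all use the same one-liner: take the fourth field of `ip -o -4 addr show`, which is ADDR/PREFIX, and strip the prefix length. A standalone sketch of that idiom, using the mlx_0_0 interface from this run:

    # First IPv4 address bound to an interface, prefix length stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
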
00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:13.337 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:13.338 192.168.100.9' 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:13.338 192.168.100.9' 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:13.338 192.168.100.9' 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
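The target-IP selection just above is a simple split of the discovered address list: the first line becomes NVMF_FIRST_TARGET_IP, the second becomes NVMF_SECOND_TARGET_IP. In sketch form, with the two addresses this run found:

    # RDMA_IP_LIST holds one discovered address per line.
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
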
00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=4105539 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 4105539 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 4105539 ']' 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.338 12:50:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.338 [2024-11-27 12:50:39.497692] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:12:13.338 [2024-11-27 12:50:39.497750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.338 [2024-11-27 12:50:39.588034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.338 [2024-11-27 12:50:39.628305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.338 [2024-11-27 12:50:39.628343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.338 [2024-11-27 12:50:39.628352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.338 [2024-11-27 12:50:39.628363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.338 [2024-11-27 12:50:39.628370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
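nvmfappstart, whose output begins above, boils down to launching the target and polling its RPC socket until it answers. A rough sketch under this run's paths; the rpc_get_methods probe is an assumption about what waitforlisten checks, not something the log shows:

    # Start the NVMe-oF target on cores 0-3 (-m 0xF) with all tracepoint
    # groups enabled (-e 0xFFFF), then wait for the RPC socket to come up.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    while ! scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
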
00:12:13.338 [2024-11-27 12:50:39.630045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.338 [2024-11-27 12:50:39.630065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.338 [2024-11-27 12:50:39.630159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.338 [2024-11-27 12:50:39.630161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:14.276 "tick_rate": 2500000000, 00:12:14.276 "poll_groups": [ 00:12:14.276 { 00:12:14.276 "name": "nvmf_tgt_poll_group_000", 00:12:14.276 "admin_qpairs": 0, 00:12:14.276 "io_qpairs": 0, 00:12:14.276 "current_admin_qpairs": 0, 00:12:14.276 "current_io_qpairs": 0, 00:12:14.276 "pending_bdev_io": 0, 00:12:14.276 "completed_nvme_io": 0, 00:12:14.276 "transports": [] 00:12:14.276 }, 00:12:14.276 { 00:12:14.276 "name": "nvmf_tgt_poll_group_001", 00:12:14.276 "admin_qpairs": 0, 00:12:14.276 "io_qpairs": 0, 00:12:14.276 "current_admin_qpairs": 0, 00:12:14.276 "current_io_qpairs": 0, 00:12:14.276 "pending_bdev_io": 0, 00:12:14.276 "completed_nvme_io": 0, 00:12:14.276 "transports": [] 00:12:14.276 }, 00:12:14.276 { 00:12:14.276 "name": "nvmf_tgt_poll_group_002", 00:12:14.276 "admin_qpairs": 0, 00:12:14.276 "io_qpairs": 0, 00:12:14.276 "current_admin_qpairs": 0, 00:12:14.276 "current_io_qpairs": 0, 00:12:14.276 "pending_bdev_io": 0, 00:12:14.276 "completed_nvme_io": 0, 00:12:14.276 "transports": [] 00:12:14.276 }, 00:12:14.276 { 00:12:14.276 "name": "nvmf_tgt_poll_group_003", 00:12:14.276 "admin_qpairs": 0, 00:12:14.276 "io_qpairs": 0, 00:12:14.276 "current_admin_qpairs": 0, 00:12:14.276 "current_io_qpairs": 0, 00:12:14.276 "pending_bdev_io": 0, 00:12:14.276 "completed_nvme_io": 0, 00:12:14.276 "transports": [] 00:12:14.276 } 00:12:14.276 ] 00:12:14.276 }' 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.276 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.276 [2024-11-27 12:50:40.534210] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1985e50/0x198a340) succeed. 00:12:14.276 [2024-11-27 12:50:40.543849] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19874e0/0x19cb9e0) succeed. 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:14.536 "tick_rate": 2500000000, 00:12:14.536 "poll_groups": [ 00:12:14.536 { 00:12:14.536 "name": "nvmf_tgt_poll_group_000", 00:12:14.536 "admin_qpairs": 0, 00:12:14.536 "io_qpairs": 0, 00:12:14.536 "current_admin_qpairs": 0, 00:12:14.536 "current_io_qpairs": 0, 00:12:14.536 "pending_bdev_io": 0, 00:12:14.536 "completed_nvme_io": 0, 00:12:14.536 "transports": [ 00:12:14.536 { 00:12:14.536 "trtype": "RDMA", 00:12:14.536 "pending_data_buffer": 0, 00:12:14.536 "devices": [ 00:12:14.536 { 00:12:14.536 "name": "mlx5_0", 00:12:14.536 "polls": 15941, 00:12:14.536 "idle_polls": 15941, 00:12:14.536 "completions": 0, 00:12:14.536 "requests": 0, 00:12:14.536 "request_latency": 0, 00:12:14.536 "pending_free_request": 0, 00:12:14.536 "pending_rdma_read": 0, 00:12:14.536 "pending_rdma_write": 0, 00:12:14.536 "pending_rdma_send": 0, 00:12:14.536 "total_send_wrs": 0, 00:12:14.536 "send_doorbell_updates": 0, 00:12:14.536 "total_recv_wrs": 4096, 00:12:14.536 "recv_doorbell_updates": 1 00:12:14.536 }, 00:12:14.536 { 00:12:14.536 "name": "mlx5_1", 00:12:14.536 "polls": 15941, 00:12:14.536 "idle_polls": 15941, 00:12:14.536 "completions": 0, 00:12:14.536 "requests": 0, 00:12:14.536 "request_latency": 0, 00:12:14.536 "pending_free_request": 0, 00:12:14.536 "pending_rdma_read": 0, 00:12:14.536 "pending_rdma_write": 0, 00:12:14.536 "pending_rdma_send": 0, 00:12:14.536 "total_send_wrs": 0, 00:12:14.536 "send_doorbell_updates": 0, 00:12:14.536 "total_recv_wrs": 4096, 00:12:14.536 "recv_doorbell_updates": 1 00:12:14.536 } 00:12:14.536 ] 00:12:14.536 } 00:12:14.536 ] 00:12:14.536 }, 00:12:14.536 { 00:12:14.536 "name": "nvmf_tgt_poll_group_001", 00:12:14.536 "admin_qpairs": 0, 00:12:14.536 "io_qpairs": 0, 00:12:14.536 "current_admin_qpairs": 0, 00:12:14.536 "current_io_qpairs": 0, 00:12:14.536 "pending_bdev_io": 0, 00:12:14.536 "completed_nvme_io": 0, 00:12:14.536 "transports": [ 00:12:14.536 { 00:12:14.536 "trtype": "RDMA", 00:12:14.536 "pending_data_buffer": 0, 00:12:14.536 "devices": [ 00:12:14.536 { 00:12:14.536 "name": "mlx5_0", 
00:12:14.536 "polls": 9823, 00:12:14.536 "idle_polls": 9823, 00:12:14.536 "completions": 0, 00:12:14.536 "requests": 0, 00:12:14.536 "request_latency": 0, 00:12:14.536 "pending_free_request": 0, 00:12:14.536 "pending_rdma_read": 0, 00:12:14.536 "pending_rdma_write": 0, 00:12:14.536 "pending_rdma_send": 0, 00:12:14.536 "total_send_wrs": 0, 00:12:14.536 "send_doorbell_updates": 0, 00:12:14.536 "total_recv_wrs": 4096, 00:12:14.536 "recv_doorbell_updates": 1 00:12:14.536 }, 00:12:14.536 { 00:12:14.536 "name": "mlx5_1", 00:12:14.536 "polls": 9823, 00:12:14.536 "idle_polls": 9823, 00:12:14.536 "completions": 0, 00:12:14.536 "requests": 0, 00:12:14.536 "request_latency": 0, 00:12:14.536 "pending_free_request": 0, 00:12:14.536 "pending_rdma_read": 0, 00:12:14.536 "pending_rdma_write": 0, 00:12:14.536 "pending_rdma_send": 0, 00:12:14.536 "total_send_wrs": 0, 00:12:14.536 "send_doorbell_updates": 0, 00:12:14.536 "total_recv_wrs": 4096, 00:12:14.536 "recv_doorbell_updates": 1 00:12:14.536 } 00:12:14.536 ] 00:12:14.536 } 00:12:14.536 ] 00:12:14.536 }, 00:12:14.536 { 00:12:14.536 "name": "nvmf_tgt_poll_group_002", 00:12:14.536 "admin_qpairs": 0, 00:12:14.536 "io_qpairs": 0, 00:12:14.536 "current_admin_qpairs": 0, 00:12:14.536 "current_io_qpairs": 0, 00:12:14.536 "pending_bdev_io": 0, 00:12:14.536 "completed_nvme_io": 0, 00:12:14.536 "transports": [ 00:12:14.536 { 00:12:14.536 "trtype": "RDMA", 00:12:14.536 "pending_data_buffer": 0, 00:12:14.536 "devices": [ 00:12:14.536 { 00:12:14.536 "name": "mlx5_0", 00:12:14.536 "polls": 5545, 00:12:14.536 "idle_polls": 5545, 00:12:14.536 "completions": 0, 00:12:14.536 "requests": 0, 00:12:14.536 "request_latency": 0, 00:12:14.536 "pending_free_request": 0, 00:12:14.536 "pending_rdma_read": 0, 00:12:14.536 "pending_rdma_write": 0, 00:12:14.536 "pending_rdma_send": 0, 00:12:14.536 "total_send_wrs": 0, 00:12:14.536 "send_doorbell_updates": 0, 00:12:14.536 "total_recv_wrs": 4096, 00:12:14.536 "recv_doorbell_updates": 1 00:12:14.536 }, 00:12:14.536 { 00:12:14.536 "name": "mlx5_1", 00:12:14.536 "polls": 5545, 00:12:14.536 "idle_polls": 5545, 00:12:14.536 "completions": 0, 00:12:14.536 "requests": 0, 00:12:14.536 "request_latency": 0, 00:12:14.536 "pending_free_request": 0, 00:12:14.536 "pending_rdma_read": 0, 00:12:14.536 "pending_rdma_write": 0, 00:12:14.536 "pending_rdma_send": 0, 00:12:14.536 "total_send_wrs": 0, 00:12:14.536 "send_doorbell_updates": 0, 00:12:14.536 "total_recv_wrs": 4096, 00:12:14.536 "recv_doorbell_updates": 1 00:12:14.536 } 00:12:14.536 ] 00:12:14.536 } 00:12:14.536 ] 00:12:14.536 }, 00:12:14.536 { 00:12:14.536 "name": "nvmf_tgt_poll_group_003", 00:12:14.536 "admin_qpairs": 0, 00:12:14.536 "io_qpairs": 0, 00:12:14.536 "current_admin_qpairs": 0, 00:12:14.536 "current_io_qpairs": 0, 00:12:14.536 "pending_bdev_io": 0, 00:12:14.536 "completed_nvme_io": 0, 00:12:14.536 "transports": [ 00:12:14.536 { 00:12:14.536 "trtype": "RDMA", 00:12:14.536 "pending_data_buffer": 0, 00:12:14.536 "devices": [ 00:12:14.536 { 00:12:14.536 "name": "mlx5_0", 00:12:14.536 "polls": 907, 00:12:14.536 "idle_polls": 907, 00:12:14.536 "completions": 0, 00:12:14.536 "requests": 0, 00:12:14.536 "request_latency": 0, 00:12:14.536 "pending_free_request": 0, 00:12:14.536 "pending_rdma_read": 0, 00:12:14.536 "pending_rdma_write": 0, 00:12:14.536 "pending_rdma_send": 0, 00:12:14.536 "total_send_wrs": 0, 00:12:14.536 "send_doorbell_updates": 0, 00:12:14.536 "total_recv_wrs": 4096, 00:12:14.536 "recv_doorbell_updates": 1 00:12:14.536 }, 00:12:14.536 { 00:12:14.536 "name": "mlx5_1", 
00:12:14.536 "polls": 907, 00:12:14.536 "idle_polls": 907, 00:12:14.536 "completions": 0, 00:12:14.536 "requests": 0, 00:12:14.536 "request_latency": 0, 00:12:14.536 "pending_free_request": 0, 00:12:14.536 "pending_rdma_read": 0, 00:12:14.536 "pending_rdma_write": 0, 00:12:14.536 "pending_rdma_send": 0, 00:12:14.536 "total_send_wrs": 0, 00:12:14.536 "send_doorbell_updates": 0, 00:12:14.536 "total_recv_wrs": 4096, 00:12:14.536 "recv_doorbell_updates": 1 00:12:14.536 } 00:12:14.536 ] 00:12:14.536 } 00:12:14.536 ] 00:12:14.536 } 00:12:14.536 ] 00:12:14.536 }' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:12:14.536 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:14.797 12:50:40 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.797 Malloc1 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.797 12:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.797 [2024-11-27 12:50:41.000549] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:14.797 12:50:41 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:12:14.797 [2024-11-27 12:50:41.046977] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:12:14.797 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:14.797 could not add new controller: failed to write to nvme-fabrics device 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.797 12:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:15.734 12:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.734 12:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:15.734 12:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.734 12:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:15.734 12:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:18.272 12:50:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:18.272 12:50:44 
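The "could not add new controller" failure above is the expected negative test: the subsystem was created without this host's NQN on its allow list, so nvmf_qpair_access_allowed rejects the connect. The remedy the test applies next (rpc.sh@61, via its rpc_cmd wrapper; the scripts/rpc.py path is assumed here) is simply:

    # Allow-list this host's NQN on the subsystem, after which the same
    # nvme connect succeeds.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
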
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:18.272 12:50:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.272 12:50:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:18.272 12:50:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.272 12:50:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:18.272 12:50:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:18.841 [2024-11-27 12:50:45.148419] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:12:18.841 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:18.841 could not add new controller: failed to write to nvme-fabrics device 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.841 12:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:20.218 12:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.218 12:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:20.219 12:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.219 12:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:20.219 12:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:22.124 12:50:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:22.124 12:50:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:22.124 12:50:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.124 12:50:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:22.124 12:50:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.124 12:50:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:22.124 12:50:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.062 [2024-11-27 12:50:49.189738] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.062 12:50:49 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.062 12:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:24.000 12:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.000 12:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:24.000 12:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.000 12:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:24.000 12:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:25.908 12:50:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:25.908 12:50:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:25.908 12:50:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.908 12:50:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:25.908 12:50:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.908 12:50:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:25.908 12:50:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.847 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.847 [2024-11-27 12:50:53.218503] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:26.848 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.848 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:26.848 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.848 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.107 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.107 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.107 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.107 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.107 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.107 12:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:28.046 12:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.046 12:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:28.046 12:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.046 12:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:28.046 12:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:30.094 12:50:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:30.094 12:50:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:30.094 
12:50:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.094 12:50:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:30.094 12:50:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.094 12:50:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:30.094 12:50:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.108 [2024-11-27 12:50:57.255499] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.108 12:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:32.045 12:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.045 12:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:32.045 12:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.045 12:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:32.045 12:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:33.951 12:51:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.951 12:51:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.951 12:51:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.951 12:51:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.951 12:51:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.951 12:51:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:33.951 12:51:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.888 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.888 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:34.888 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:34.888 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.888 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:34.888 12:51:01 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.888 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:34.888 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:34.888 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.888 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.888 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.888 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.148 [2024-11-27 12:51:01.294643] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.148 12:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:36.086 12:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.086 12:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:36.086 12:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.086 12:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:36.086 12:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:37.990 12:51:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:37.990 12:51:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:37.990 12:51:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.990 12:51:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:37.990 12:51:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.990 12:51:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:37.990 12:51:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.927 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:39.186 12:51:05 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.186 [2024-11-27 12:51:05.328273] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.186 12:51:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:40.123 12:51:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.123 12:51:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:40.123 12:51:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.123 12:51:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:40.123 12:51:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:42.028 12:51:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:42.028 12:51:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:42.028 12:51:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.028 12:51:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:42.028 12:51:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:12:42.028 12:51:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:42.028 12:51:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.963 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.963 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:42.963 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:42.963 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.963 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:42.963 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.963 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:42.963 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.963 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.963 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.222 [2024-11-27 12:51:09.396221] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.222 [2024-11-27 12:51:09.444384] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.222 12:51:09 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.222 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 [2024-11-27 12:51:09.492561] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 [2024-11-27 12:51:09.540751] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
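The second loop (target/rpc.sh@99-107, traced above and continuing below) repeats the create/delete cycle without ever connecting a host: nvmf_subsystem_add_ns is called without -n, so the target assigns the first free namespace ID (1 here), which rpc.sh@105 then removes explicitly. One iteration, under the same assumptions as the earlier sketch:

    for i in $(seq 1 5); do                                           # rpc.sh@99
        rpc_cmd nvmf_create_subsystem "$nqn" -s "$serial"             # rpc.sh@100
        rpc_cmd nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420  # rpc.sh@101
        rpc_cmd nvmf_subsystem_add_ns "$nqn" Malloc1    # rpc.sh@102: no -n, nsid auto-assigned
        rpc_cmd nvmf_subsystem_allow_any_host "$nqn"                  # rpc.sh@103
        rpc_cmd nvmf_subsystem_remove_ns "$nqn" 1       # rpc.sh@105: drop the auto nsid
        rpc_cmd nvmf_delete_subsystem "$nqn"                          # rpc.sh@107
    done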
00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 [2024-11-27 12:51:09.588923] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.223 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.483 12:51:09 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.483 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:43.483 "tick_rate": 2500000000, 00:12:43.483 "poll_groups": [ 00:12:43.483 { 00:12:43.483 "name": "nvmf_tgt_poll_group_000", 00:12:43.483 "admin_qpairs": 2, 00:12:43.483 "io_qpairs": 27, 00:12:43.483 "current_admin_qpairs": 0, 00:12:43.483 "current_io_qpairs": 0, 00:12:43.483 "pending_bdev_io": 0, 00:12:43.483 "completed_nvme_io": 126, 00:12:43.483 "transports": [ 00:12:43.483 { 00:12:43.483 "trtype": "RDMA", 00:12:43.483 "pending_data_buffer": 0, 00:12:43.483 "devices": [ 00:12:43.483 { 00:12:43.483 "name": "mlx5_0", 00:12:43.483 "polls": 3600777, 00:12:43.483 "idle_polls": 3600457, 00:12:43.483 "completions": 361, 00:12:43.483 "requests": 180, 00:12:43.483 "request_latency": 36743624, 00:12:43.483 "pending_free_request": 0, 00:12:43.483 "pending_rdma_read": 0, 00:12:43.483 "pending_rdma_write": 0, 00:12:43.483 "pending_rdma_send": 0, 00:12:43.483 "total_send_wrs": 305, 00:12:43.483 "send_doorbell_updates": 157, 00:12:43.483 "total_recv_wrs": 4276, 00:12:43.483 "recv_doorbell_updates": 157 00:12:43.483 }, 00:12:43.483 { 00:12:43.483 "name": "mlx5_1", 00:12:43.483 "polls": 3600777, 00:12:43.483 "idle_polls": 3600777, 00:12:43.483 "completions": 0, 00:12:43.483 "requests": 0, 00:12:43.483 "request_latency": 0, 00:12:43.483 "pending_free_request": 0, 00:12:43.483 "pending_rdma_read": 0, 00:12:43.483 "pending_rdma_write": 0, 00:12:43.483 "pending_rdma_send": 0, 00:12:43.483 "total_send_wrs": 0, 00:12:43.483 "send_doorbell_updates": 0, 00:12:43.483 "total_recv_wrs": 4096, 00:12:43.483 "recv_doorbell_updates": 1 00:12:43.483 } 00:12:43.483 ] 00:12:43.483 } 00:12:43.483 ] 00:12:43.483 }, 00:12:43.483 { 00:12:43.483 "name": "nvmf_tgt_poll_group_001", 00:12:43.483 "admin_qpairs": 2, 00:12:43.483 "io_qpairs": 26, 00:12:43.483 "current_admin_qpairs": 0, 00:12:43.483 "current_io_qpairs": 0, 00:12:43.483 "pending_bdev_io": 0, 00:12:43.483 "completed_nvme_io": 126, 00:12:43.483 "transports": [ 00:12:43.483 { 00:12:43.483 "trtype": "RDMA", 00:12:43.483 "pending_data_buffer": 0, 00:12:43.483 "devices": [ 00:12:43.483 { 00:12:43.483 "name": "mlx5_0", 00:12:43.483 "polls": 3536504, 00:12:43.483 "idle_polls": 3536187, 00:12:43.483 "completions": 356, 00:12:43.483 "requests": 178, 00:12:43.483 "request_latency": 36808178, 00:12:43.483 "pending_free_request": 0, 00:12:43.483 "pending_rdma_read": 0, 00:12:43.483 "pending_rdma_write": 0, 00:12:43.483 "pending_rdma_send": 0, 00:12:43.483 "total_send_wrs": 302, 00:12:43.483 "send_doorbell_updates": 154, 00:12:43.483 "total_recv_wrs": 4274, 00:12:43.483 "recv_doorbell_updates": 155 00:12:43.483 }, 00:12:43.483 { 00:12:43.483 "name": "mlx5_1", 00:12:43.483 "polls": 3536504, 00:12:43.483 "idle_polls": 3536504, 00:12:43.483 "completions": 0, 00:12:43.483 "requests": 0, 00:12:43.483 "request_latency": 0, 00:12:43.483 "pending_free_request": 0, 00:12:43.483 
"pending_rdma_read": 0, 00:12:43.483 "pending_rdma_write": 0, 00:12:43.483 "pending_rdma_send": 0, 00:12:43.483 "total_send_wrs": 0, 00:12:43.483 "send_doorbell_updates": 0, 00:12:43.483 "total_recv_wrs": 4096, 00:12:43.483 "recv_doorbell_updates": 1 00:12:43.483 } 00:12:43.483 ] 00:12:43.483 } 00:12:43.483 ] 00:12:43.483 }, 00:12:43.483 { 00:12:43.483 "name": "nvmf_tgt_poll_group_002", 00:12:43.483 "admin_qpairs": 1, 00:12:43.483 "io_qpairs": 26, 00:12:43.483 "current_admin_qpairs": 0, 00:12:43.483 "current_io_qpairs": 0, 00:12:43.483 "pending_bdev_io": 0, 00:12:43.483 "completed_nvme_io": 126, 00:12:43.483 "transports": [ 00:12:43.483 { 00:12:43.483 "trtype": "RDMA", 00:12:43.483 "pending_data_buffer": 0, 00:12:43.483 "devices": [ 00:12:43.483 { 00:12:43.483 "name": "mlx5_0", 00:12:43.483 "polls": 3599277, 00:12:43.483 "idle_polls": 3599008, 00:12:43.483 "completions": 309, 00:12:43.483 "requests": 154, 00:12:43.483 "request_latency": 34628448, 00:12:43.483 "pending_free_request": 0, 00:12:43.483 "pending_rdma_read": 0, 00:12:43.483 "pending_rdma_write": 0, 00:12:43.483 "pending_rdma_send": 0, 00:12:43.483 "total_send_wrs": 268, 00:12:43.483 "send_doorbell_updates": 130, 00:12:43.483 "total_recv_wrs": 4250, 00:12:43.483 "recv_doorbell_updates": 130 00:12:43.483 }, 00:12:43.483 { 00:12:43.483 "name": "mlx5_1", 00:12:43.483 "polls": 3599277, 00:12:43.483 "idle_polls": 3599277, 00:12:43.483 "completions": 0, 00:12:43.483 "requests": 0, 00:12:43.483 "request_latency": 0, 00:12:43.483 "pending_free_request": 0, 00:12:43.483 "pending_rdma_read": 0, 00:12:43.483 "pending_rdma_write": 0, 00:12:43.483 "pending_rdma_send": 0, 00:12:43.484 "total_send_wrs": 0, 00:12:43.484 "send_doorbell_updates": 0, 00:12:43.484 "total_recv_wrs": 4096, 00:12:43.484 "recv_doorbell_updates": 1 00:12:43.484 } 00:12:43.484 ] 00:12:43.484 } 00:12:43.484 ] 00:12:43.484 }, 00:12:43.484 { 00:12:43.484 "name": "nvmf_tgt_poll_group_003", 00:12:43.484 "admin_qpairs": 2, 00:12:43.484 "io_qpairs": 26, 00:12:43.484 "current_admin_qpairs": 0, 00:12:43.484 "current_io_qpairs": 0, 00:12:43.484 "pending_bdev_io": 0, 00:12:43.484 "completed_nvme_io": 77, 00:12:43.484 "transports": [ 00:12:43.484 { 00:12:43.484 "trtype": "RDMA", 00:12:43.484 "pending_data_buffer": 0, 00:12:43.484 "devices": [ 00:12:43.484 { 00:12:43.484 "name": "mlx5_0", 00:12:43.484 "polls": 2858187, 00:12:43.484 "idle_polls": 2857946, 00:12:43.484 "completions": 262, 00:12:43.484 "requests": 131, 00:12:43.484 "request_latency": 22695786, 00:12:43.484 "pending_free_request": 0, 00:12:43.484 "pending_rdma_read": 0, 00:12:43.484 "pending_rdma_write": 0, 00:12:43.484 "pending_rdma_send": 0, 00:12:43.484 "total_send_wrs": 207, 00:12:43.484 "send_doorbell_updates": 119, 00:12:43.484 "total_recv_wrs": 4227, 00:12:43.484 "recv_doorbell_updates": 120 00:12:43.484 }, 00:12:43.484 { 00:12:43.484 "name": "mlx5_1", 00:12:43.484 "polls": 2858187, 00:12:43.484 "idle_polls": 2858187, 00:12:43.484 "completions": 0, 00:12:43.484 "requests": 0, 00:12:43.484 "request_latency": 0, 00:12:43.484 "pending_free_request": 0, 00:12:43.484 "pending_rdma_read": 0, 00:12:43.484 "pending_rdma_write": 0, 00:12:43.484 "pending_rdma_send": 0, 00:12:43.484 "total_send_wrs": 0, 00:12:43.484 "send_doorbell_updates": 0, 00:12:43.484 "total_recv_wrs": 4096, 00:12:43.484 "recv_doorbell_updates": 1 00:12:43.484 } 00:12:43.484 ] 00:12:43.484 } 00:12:43.484 ] 00:12:43.484 } 00:12:43.484 ] 00:12:43.484 }' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1288 > 0 )) 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 130876036 > 0 )) 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:43.484 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:43.743 rmmod nvme_rdma 00:12:43.743 rmmod nvme_fabrics 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.743 
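The jsum helper traced at target/rpc.sh@19-20 collapses the nvmf_get_stats JSON captured at rpc.sh@110 into a single total: jq emits one number per poll group (or per RDMA device) and awk sums the stream. A sketch consistent with the trace; feeding the captured $stats through a here-string is an assumption, as the trace shows only the jq and awk stages:

    jsum() {
        local filter=$1                                         # rpc.sh@19
        jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'   # rpc.sh@20
    }

    jsum '.poll_groups[].admin_qpairs'                             # 2+2+1+2     = 7
    jsum '.poll_groups[].io_qpairs'                                # 27+26+26+26 = 105
    jsum '.poll_groups[].transports[].devices[].completions'      # 1288 across mlx5_0/mlx5_1
    jsum '.poll_groups[].transports[].devices[].request_latency'  # 130876036

The assertions at rpc.sh@112-118 only require each sum to be positive, i.e. that all four poll groups actually carried admin and I/O traffic over the mlx5 devices during the run.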
12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 4105539 ']' 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 4105539 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 4105539 ']' 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 4105539 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4105539 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4105539' 00:12:43.743 killing process with pid 4105539 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 4105539 00:12:43.743 12:51:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 4105539 00:12:44.003 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:44.003 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:44.003 00:12:44.003 real 0m39.012s 00:12:44.003 user 2m4.582s 00:12:44.003 sys 0m7.996s 00:12:44.003 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.003 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.003 ************************************ 00:12:44.003 END TEST nvmf_rpc 00:12:44.003 ************************************ 00:12:44.003 12:51:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:44.003 12:51:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:44.003 12:51:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.003 12:51:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:44.003 ************************************ 00:12:44.003 START TEST nvmf_invalid 00:12:44.003 ************************************ 00:12:44.003 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:44.263 * Looking for test storage... 
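The nvmf_rpc teardown above reaps the target through killprocess (common/autotest_common.sh@954-978), which refuses to signal anything it should not: the pid must be set and still alive, and must not resolve to the sudo wrapper (here it resolves to reactor_0). A simplified sketch of the traced flow; the real helper's error handling is not fully visible in the trace:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                            # @954: pid must be set
        kill -0 "$pid"                                       # @958: fail if already gone
        local process_name=
        if [ "$(uname)" = Linux ]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_0 here
        fi
        [ "$process_name" = sudo ] && return 1               # @964: never kill the sudo wrapper
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973
        wait "$pid" || true   # @978: reap it; ignoring the kill-induced rc is an assumption
    }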
00:12:44.263 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.263 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:44.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.264 --rc genhtml_branch_coverage=1 00:12:44.264 --rc genhtml_function_coverage=1 00:12:44.264 --rc genhtml_legend=1 00:12:44.264 --rc geninfo_all_blocks=1 00:12:44.264 --rc geninfo_unexecuted_blocks=1 00:12:44.264 00:12:44.264 ' 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:44.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.264 --rc genhtml_branch_coverage=1 00:12:44.264 --rc genhtml_function_coverage=1 00:12:44.264 --rc genhtml_legend=1 00:12:44.264 --rc geninfo_all_blocks=1 00:12:44.264 --rc geninfo_unexecuted_blocks=1 00:12:44.264 00:12:44.264 ' 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:44.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.264 --rc genhtml_branch_coverage=1 00:12:44.264 --rc genhtml_function_coverage=1 00:12:44.264 --rc genhtml_legend=1 00:12:44.264 --rc geninfo_all_blocks=1 00:12:44.264 --rc geninfo_unexecuted_blocks=1 00:12:44.264 00:12:44.264 ' 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:44.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.264 --rc genhtml_branch_coverage=1 00:12:44.264 --rc genhtml_function_coverage=1 00:12:44.264 --rc genhtml_legend=1 00:12:44.264 --rc geninfo_all_blocks=1 00:12:44.264 --rc geninfo_unexecuted_blocks=1 00:12:44.264 00:12:44.264 ' 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:44.264 
12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.264 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.264 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:44.265 12:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.246 12:51:19 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:54.246 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:54.246 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:54.246 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:54.246 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:12:54.246 12:51:19 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:54.246 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:54.247 12:51:19 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}'
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:12:54.247 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:12:54.247 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:12:54.247 altname enp217s0f0np0
00:12:54.247 altname ens818f0np0
00:12:54.247 inet 192.168.100.8/24 scope global mlx_0_0
00:12:54.247 valid_lft forever preferred_lft forever
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}'
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:12:54.247 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:12:54.247 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:12:54.247 altname enp217s0f1np1
00:12:54.247 altname ens818f1np1
00:12:54.247 inet 192.168.100.9/24 scope global mlx_0_1
00:12:54.247 valid_lft forever preferred_lft forever
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:12:54.247 12:51:19
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:54.247 192.168.100.9' 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:54.247 192.168.100.9' 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:54.247 12:51:19 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:54.247 192.168.100.9' 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=4115573 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 4115573 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 4115573 ']' 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.247 12:51:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.247 [2024-11-27 12:51:19.428668] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:12:54.247 [2024-11-27 12:51:19.428725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.247 [2024-11-27 12:51:19.519190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.247 [2024-11-27 12:51:19.559347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.247 [2024-11-27 12:51:19.559387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
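Note: the trace below is the core of invalid.sh: it issues nvmf_create_subsystem RPCs with a bogus target name, then a serial number and a model number that end in the 0x1f control character, and asserts that each call fails with the matching JSON-RPC error text. A condensed sketch of those three checks, with flags, NQNs, and error strings taken from the trace itself (the expect_err helper is illustrative, not part of the test suite; rpc points at the in-tree scripts/rpc.py):

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# expect_err PATTERN CMD...: the RPC must fail, and its output must contain PATTERN.
expect_err() {
    local pattern=$1 out; shift
    if out=$("$@" 2>&1); then
        return 1                      # the call succeeding would itself be a test failure
    fi
    [[ $out == *"$pattern"* ]]
}

# 1) unknown target name -> code -32603, "Unable to find target foobar"
expect_err 'Unable to find target' "$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15235
# 2) serial number with a trailing control character (0x1f) -> code -32602, "Invalid SN"
expect_err 'Invalid SN' "$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode30062
# 3) model number with a trailing control character -> code -32602, "Invalid MN"
expect_err 'Invalid MN' "$rpc" nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10429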
00:12:54.247 [2024-11-27 12:51:19.559396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:54.247 [2024-11-27 12:51:19.559404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:54.247 [2024-11-27 12:51:19.559411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:54.247 [2024-11-27 12:51:19.560978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:54.247 [2024-11-27 12:51:19.561090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:54.247 [2024-11-27 12:51:19.561182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:54.247 [2024-11-27 12:51:19.561184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:54.247 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:54.247 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:12:54.247 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:54.247 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:54.247 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:54.248 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:54.248 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:12:54.248 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15235
00:12:54.248 [2024-11-27 12:51:20.485912] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:12:54.248 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:12:54.248 {
00:12:54.248 "nqn": "nqn.2016-06.io.spdk:cnode15235",
00:12:54.248 "tgt_name": "foobar",
00:12:54.248 "method": "nvmf_create_subsystem",
00:12:54.248 "req_id": 1
00:12:54.248 }
00:12:54.248 Got JSON-RPC error response
00:12:54.248 response:
00:12:54.248 {
00:12:54.248 "code": -32603,
00:12:54.248 "message": "Unable to find target foobar"
00:12:54.248 }'
00:12:54.248 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:12:54.248 {
00:12:54.248 "nqn": "nqn.2016-06.io.spdk:cnode15235",
00:12:54.248 "tgt_name": "foobar",
00:12:54.248 "method": "nvmf_create_subsystem",
00:12:54.248 "req_id": 1
00:12:54.248 }
00:12:54.248 Got JSON-RPC error response
00:12:54.248 response:
00:12:54.248 {
00:12:54.248 "code": -32603,
00:12:54.248 "message": "Unable to find target foobar"
00:12:54.248 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:12:54.248 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:12:54.248 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode30062
00:12:54.506 [2024-11-27 12:51:20.686646] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30062: invalid serial number 'SPDKISFASTANDAWESOME'
00:12:54.507 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:12:54.507 {
00:12:54.507 "nqn": "nqn.2016-06.io.spdk:cnode30062",
00:12:54.507 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:12:54.507 "method": "nvmf_create_subsystem",
00:12:54.507 "req_id": 1
00:12:54.507 }
00:12:54.507 Got JSON-RPC error response
00:12:54.507 response:
00:12:54.507 {
00:12:54.507 "code": -32602,
00:12:54.507 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:12:54.507 }'
00:12:54.507 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:12:54.507 {
00:12:54.507 "nqn": "nqn.2016-06.io.spdk:cnode30062",
00:12:54.507 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:12:54.507 "method": "nvmf_create_subsystem",
00:12:54.507 "req_id": 1
00:12:54.507 }
00:12:54.507 Got JSON-RPC error response
00:12:54.507 response:
00:12:54.507 {
00:12:54.507 "code": -32602,
00:12:54.507 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:12:54.507 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:12:54.507 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:12:54.507 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10429
00:12:54.507 [2024-11-27 12:51:20.887234] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10429: invalid model number 'SPDK_Controller'
00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:12:54.768 {
00:12:54.768 "nqn": "nqn.2016-06.io.spdk:cnode10429",
00:12:54.768 "model_number": "SPDK_Controller\u001f",
00:12:54.768 "method": "nvmf_create_subsystem",
00:12:54.768 "req_id": 1
00:12:54.768 }
00:12:54.768 Got JSON-RPC error response
00:12:54.768 response:
00:12:54.768 {
00:12:54.768 "code": -32602,
00:12:54.768 "message": "Invalid MN SPDK_Controller\u001f"
00:12:54.768 }'
00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:12:54.768 {
00:12:54.768 "nqn": "nqn.2016-06.io.spdk:cnode10429",
00:12:54.768 "model_number": "SPDK_Controller\u001f",
00:12:54.768 "method": "nvmf_create_subsystem",
00:12:54.768 "req_id": 1
00:12:54.768 }
00:12:54.768 Got JSON-RPC error response
00:12:54.768 response:
00:12:54.768 {
00:12:54.768 "code": -32602,
00:12:54.768 "message": "Invalid MN SPDK_Controller\u001f"
00:12:54.768 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid
-- target/invalid.sh@21 -- # local chars 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.768 12:51:20 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.768 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:54.769 12:51:20 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:54.769 12:51:21 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]] 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'rCZeu<\Zp)5]T:tXi%".]' 00:12:54.769 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'rCZeu<\Zp)5]T:tXi%".]' nqn.2016-06.io.spdk:cnode29092 00:12:55.029 [2024-11-27 12:51:21.260454] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29092: invalid serial number 'rCZeu<\Zp)5]T:tXi%".]' 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:55.029 { 00:12:55.029 "nqn": "nqn.2016-06.io.spdk:cnode29092", 00:12:55.029 "serial_number": "rCZeu<\\Zp)5]T:tXi%\".]", 00:12:55.029 "method": "nvmf_create_subsystem", 00:12:55.029 "req_id": 1 00:12:55.029 } 00:12:55.029 Got JSON-RPC error response 00:12:55.029 response: 00:12:55.029 { 00:12:55.029 "code": -32602, 00:12:55.029 "message": "Invalid SN rCZeu<\\Zp)5]T:tXi%\".]" 00:12:55.029 }' 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:55.029 { 00:12:55.029 "nqn": "nqn.2016-06.io.spdk:cnode29092", 00:12:55.029 "serial_number": "rCZeu<\\Zp)5]T:tXi%\".]", 00:12:55.029 "method": "nvmf_create_subsystem", 00:12:55.029 "req_id": 1 00:12:55.029 } 00:12:55.029 Got JSON-RPC error response 00:12:55.029 response: 00:12:55.029 { 00:12:55.029 "code": -32602, 00:12:55.029 "message": "Invalid SN rCZeu<\\Zp)5]T:tXi%\".]" 00:12:55.029 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:55.029 
12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.029 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x32' 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.030 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.290 12:51:21 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.290 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:55.291 12:51:21 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # printf %x 106 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'P'\''Rky!]df&lA2jF{I6Ff:;p5-v(>7`ca![VXCjR' 00:12:55.291 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'P'\''Rky!]df&lA2jF{I6Ff:;p5-v(>7`ca![VXCjR' nqn.2016-06.io.spdk:cnode18781 00:12:55.550 [2024-11-27 12:51:21.806259] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18781: invalid model number 'P'Rky!]df&lA2jF{I6Ff:;p5-v(>7`ca![VXCjR' 00:12:55.550 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:55.550 { 00:12:55.550 "nqn": "nqn.2016-06.io.spdk:cnode18781", 00:12:55.550 "model_number": "P'\''Rky!]df&lA2jF{I6Ff:;p5-v(\u007f>7`ca![V\u007fXCjR", 00:12:55.550 "method": "nvmf_create_subsystem", 00:12:55.550 "req_id": 1 00:12:55.550 } 00:12:55.550 Got JSON-RPC error response 00:12:55.550 response: 00:12:55.550 { 00:12:55.550 "code": -32602, 00:12:55.550 "message": "Invalid MN P'\''Rky!]df&lA2jF{I6Ff:;p5-v(\u007f>7`ca![V\u007fXCjR" 00:12:55.550 }' 00:12:55.550 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:55.550 { 00:12:55.550 "nqn": "nqn.2016-06.io.spdk:cnode18781", 00:12:55.550 "model_number": "P'Rky!]df&lA2jF{I6Ff:;p5-v(\u007f>7`ca![V\u007fXCjR", 00:12:55.550 "method": "nvmf_create_subsystem", 00:12:55.550 "req_id": 1 00:12:55.550 } 00:12:55.550 Got JSON-RPC error response 00:12:55.550 response: 00:12:55.550 { 00:12:55.550 "code": -32602, 00:12:55.550 "message": "Invalid MN P'Rky!]df&lA2jF{I6Ff:;p5-v(\u007f>7`ca![V\u007fXCjR" 00:12:55.550 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:55.550 12:51:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:12:55.809 [2024-11-27 12:51:22.033495] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x822710/0x826c00) succeed. 00:12:55.809 [2024-11-27 12:51:22.042601] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x823da0/0x8682a0) succeed. 
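[editor's note] The trace above is target/invalid.sh building a random serial number and model number one character at a time: printf %x picks an ASCII code out of the chars array, echo -e turns it back into a byte, and the loop appends it to string. The test then asserts that nvmf_create_subsystem rejects the value. A condensed sketch of that pattern, not the verbatim script; the RANDOM-based character pick and the || true guard are assumptions, the rpc.py path and NQN are taken from the log:

gen_random_s() {                      # simplified from target/invalid.sh
    local length=$1 ll code string=
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 96 + 32 ))  # printable ASCII 32..127, as in the chars array
        string+=$(echo -e "\\x$(printf %x "$code")")
    done
    echo "$string"
}

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sn=$(gen_random_s 21)                 # the traced serial was 21 characters
out=$($rpc nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode29092 2>&1) || true
[[ $out == *"Invalid SN"* ]]          # the test only passes if the RPC rejects it

The model-number case is the same loop with gen_random_s 41 and a match on "Invalid MN", as the trace shows.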
00:12:55.809 12:51:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:56.068 12:51:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:12:56.068 12:51:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:12:56.068 192.168.100.9' 00:12:56.068 12:51:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:56.068 12:51:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:12:56.068 12:51:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:12:56.327 [2024-11-27 12:51:22.570521] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:56.327 12:51:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:56.327 { 00:12:56.327 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:56.327 "listen_address": { 00:12:56.327 "trtype": "rdma", 00:12:56.327 "traddr": "192.168.100.8", 00:12:56.327 "trsvcid": "4421" 00:12:56.327 }, 00:12:56.327 "method": "nvmf_subsystem_remove_listener", 00:12:56.327 "req_id": 1 00:12:56.327 } 00:12:56.327 Got JSON-RPC error response 00:12:56.327 response: 00:12:56.327 { 00:12:56.327 "code": -32602, 00:12:56.327 "message": "Invalid parameters" 00:12:56.327 }' 00:12:56.327 12:51:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:56.327 { 00:12:56.327 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:56.327 "listen_address": { 00:12:56.327 "trtype": "rdma", 00:12:56.327 "traddr": "192.168.100.8", 00:12:56.327 "trsvcid": "4421" 00:12:56.327 }, 00:12:56.327 "method": "nvmf_subsystem_remove_listener", 00:12:56.327 "req_id": 1 00:12:56.327 } 00:12:56.327 Got JSON-RPC error response 00:12:56.327 response: 00:12:56.327 { 00:12:56.327 "code": -32602, 00:12:56.327 "message": "Invalid parameters" 00:12:56.327 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:56.327 12:51:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18531 -i 0 00:12:56.586 [2024-11-27 12:51:22.771203] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18531: invalid cntlid range [0-65519] 00:12:56.586 12:51:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:56.586 { 00:12:56.586 "nqn": "nqn.2016-06.io.spdk:cnode18531", 00:12:56.586 "min_cntlid": 0, 00:12:56.586 "method": "nvmf_create_subsystem", 00:12:56.586 "req_id": 1 00:12:56.586 } 00:12:56.586 Got JSON-RPC error response 00:12:56.586 response: 00:12:56.586 { 00:12:56.586 "code": -32602, 00:12:56.586 "message": "Invalid cntlid range [0-65519]" 00:12:56.586 }' 00:12:56.586 12:51:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:56.586 { 00:12:56.586 "nqn": "nqn.2016-06.io.spdk:cnode18531", 00:12:56.586 "min_cntlid": 0, 00:12:56.586 "method": "nvmf_create_subsystem", 00:12:56.586 "req_id": 1 00:12:56.586 } 00:12:56.586 Got JSON-RPC error response 00:12:56.586 response: 00:12:56.586 { 00:12:56.586 "code": -32602, 00:12:56.586 "message": 
"Invalid cntlid range [0-65519]" 00:12:56.586 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.586 12:51:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18127 -i 65520 00:12:56.846 [2024-11-27 12:51:22.975945] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18127: invalid cntlid range [65520-65519] 00:12:56.846 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:56.846 { 00:12:56.846 "nqn": "nqn.2016-06.io.spdk:cnode18127", 00:12:56.846 "min_cntlid": 65520, 00:12:56.846 "method": "nvmf_create_subsystem", 00:12:56.846 "req_id": 1 00:12:56.846 } 00:12:56.846 Got JSON-RPC error response 00:12:56.846 response: 00:12:56.846 { 00:12:56.846 "code": -32602, 00:12:56.846 "message": "Invalid cntlid range [65520-65519]" 00:12:56.846 }' 00:12:56.846 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:56.846 { 00:12:56.846 "nqn": "nqn.2016-06.io.spdk:cnode18127", 00:12:56.846 "min_cntlid": 65520, 00:12:56.846 "method": "nvmf_create_subsystem", 00:12:56.846 "req_id": 1 00:12:56.846 } 00:12:56.846 Got JSON-RPC error response 00:12:56.846 response: 00:12:56.846 { 00:12:56.846 "code": -32602, 00:12:56.846 "message": "Invalid cntlid range [65520-65519]" 00:12:56.846 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.846 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22509 -I 0 00:12:56.846 [2024-11-27 12:51:23.184704] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22509: invalid cntlid range [1-0] 00:12:56.846 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:56.846 { 00:12:56.846 "nqn": "nqn.2016-06.io.spdk:cnode22509", 00:12:56.846 "max_cntlid": 0, 00:12:56.846 "method": "nvmf_create_subsystem", 00:12:56.846 "req_id": 1 00:12:56.846 } 00:12:56.846 Got JSON-RPC error response 00:12:56.846 response: 00:12:56.846 { 00:12:56.846 "code": -32602, 00:12:56.846 "message": "Invalid cntlid range [1-0]" 00:12:56.846 }' 00:12:56.846 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:56.846 { 00:12:56.846 "nqn": "nqn.2016-06.io.spdk:cnode22509", 00:12:56.846 "max_cntlid": 0, 00:12:56.846 "method": "nvmf_create_subsystem", 00:12:56.846 "req_id": 1 00:12:56.846 } 00:12:56.846 Got JSON-RPC error response 00:12:56.846 response: 00:12:56.846 { 00:12:56.846 "code": -32602, 00:12:56.846 "message": "Invalid cntlid range [1-0]" 00:12:56.846 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.846 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8494 -I 65520 00:12:57.105 [2024-11-27 12:51:23.377454] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8494: invalid cntlid range [1-65520] 00:12:57.105 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:57.105 { 00:12:57.105 "nqn": "nqn.2016-06.io.spdk:cnode8494", 00:12:57.105 "max_cntlid": 65520, 00:12:57.105 "method": "nvmf_create_subsystem", 00:12:57.105 "req_id": 1 00:12:57.105 } 00:12:57.105 Got JSON-RPC 
error response 00:12:57.105 response: 00:12:57.105 { 00:12:57.105 "code": -32602, 00:12:57.105 "message": "Invalid cntlid range [1-65520]" 00:12:57.105 }' 00:12:57.105 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:57.105 { 00:12:57.105 "nqn": "nqn.2016-06.io.spdk:cnode8494", 00:12:57.105 "max_cntlid": 65520, 00:12:57.105 "method": "nvmf_create_subsystem", 00:12:57.105 "req_id": 1 00:12:57.105 } 00:12:57.105 Got JSON-RPC error response 00:12:57.105 response: 00:12:57.105 { 00:12:57.105 "code": -32602, 00:12:57.105 "message": "Invalid cntlid range [1-65520]" 00:12:57.105 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.105 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10719 -i 6 -I 5 00:12:57.364 [2024-11-27 12:51:23.586230] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10719: invalid cntlid range [6-5] 00:12:57.364 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:57.364 { 00:12:57.364 "nqn": "nqn.2016-06.io.spdk:cnode10719", 00:12:57.364 "min_cntlid": 6, 00:12:57.364 "max_cntlid": 5, 00:12:57.364 "method": "nvmf_create_subsystem", 00:12:57.364 "req_id": 1 00:12:57.364 } 00:12:57.364 Got JSON-RPC error response 00:12:57.364 response: 00:12:57.364 { 00:12:57.364 "code": -32602, 00:12:57.364 "message": "Invalid cntlid range [6-5]" 00:12:57.364 }' 00:12:57.364 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:57.364 { 00:12:57.364 "nqn": "nqn.2016-06.io.spdk:cnode10719", 00:12:57.364 "min_cntlid": 6, 00:12:57.364 "max_cntlid": 5, 00:12:57.364 "method": "nvmf_create_subsystem", 00:12:57.364 "req_id": 1 00:12:57.364 } 00:12:57.364 Got JSON-RPC error response 00:12:57.364 response: 00:12:57.364 { 00:12:57.364 "code": -32602, 00:12:57.364 "message": "Invalid cntlid range [6-5]" 00:12:57.364 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.364 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:57.364 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:57.364 { 00:12:57.364 "name": "foobar", 00:12:57.364 "method": "nvmf_delete_target", 00:12:57.364 "req_id": 1 00:12:57.364 } 00:12:57.364 Got JSON-RPC error response 00:12:57.364 response: 00:12:57.364 { 00:12:57.364 "code": -32602, 00:12:57.364 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:57.364 }' 00:12:57.364 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:57.364 { 00:12:57.364 "name": "foobar", 00:12:57.364 "method": "nvmf_delete_target", 00:12:57.364 "req_id": 1 00:12:57.364 } 00:12:57.364 Got JSON-RPC error response 00:12:57.364 response: 00:12:57.364 { 00:12:57.364 "code": -32602, 00:12:57.364 "message": "The specified target doesn't exist, cannot delete it." 
00:12:57.364 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:57.364 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:57.364 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:57.364 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.364 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:57.364 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:57.364 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:57.364 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:57.365 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.365 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:57.365 rmmod nvme_rdma 00:12:57.624 rmmod nvme_fabrics 00:12:57.624 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.624 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:57.624 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:57.625 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 4115573 ']' 00:12:57.625 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 4115573 00:12:57.625 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 4115573 ']' 00:12:57.625 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 4115573 00:12:57.625 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:57.625 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.625 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4115573 00:12:57.625 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.625 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.625 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4115573' 00:12:57.625 killing process with pid 4115573 00:12:57.625 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 4115573 00:12:57.625 12:51:23 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 4115573 00:12:57.884 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.884 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:57.884 00:12:57.884 real 0m13.770s 00:12:57.884 user 0m22.958s 00:12:57.884 sys 0m8.024s 00:12:57.884 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.884 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:57.884 ************************************ 00:12:57.884 
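[editor's note] Before the teardown above, the run walks the controller-ID boundaries: valid cntlids span 1..65519, so min_cntlid 0, min_cntlid 65520, max_cntlid 0 (yielding the empty range [1-0]), max_cntlid 65520, and min 6 with max 5 must all come back as "Invalid cntlid range". A hedged recap of those five calls; the loop form and the single cnode NQN are mine, while -i/-I are the real min_cntlid/max_cntlid flags and the rpc.py path is from the log:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
    # $args is intentionally unquoted so "-i 6 -I 5" splits into two flags
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 $args 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] || echo "unexpected: $out"
done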
END TEST nvmf_invalid 00:12:57.884 ************************************ 00:12:57.884 12:51:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:12:57.884 12:51:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:57.884 12:51:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.884 12:51:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:57.884 ************************************ 00:12:57.884 START TEST nvmf_connect_stress 00:12:57.884 ************************************ 00:12:57.885 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:12:57.885 * Looking for test storage... 00:12:57.885 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:57.885 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:57.885 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:57.885 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:58.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.144 --rc genhtml_branch_coverage=1 00:12:58.144 --rc genhtml_function_coverage=1 00:12:58.144 --rc genhtml_legend=1 00:12:58.144 --rc geninfo_all_blocks=1 00:12:58.144 --rc geninfo_unexecuted_blocks=1 00:12:58.144 00:12:58.144 ' 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:58.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.144 --rc genhtml_branch_coverage=1 00:12:58.144 --rc genhtml_function_coverage=1 00:12:58.144 --rc genhtml_legend=1 00:12:58.144 --rc geninfo_all_blocks=1 00:12:58.144 --rc geninfo_unexecuted_blocks=1 00:12:58.144 00:12:58.144 ' 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:58.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.144 --rc genhtml_branch_coverage=1 00:12:58.144 --rc genhtml_function_coverage=1 00:12:58.144 --rc genhtml_legend=1 00:12:58.144 --rc geninfo_all_blocks=1 00:12:58.144 --rc geninfo_unexecuted_blocks=1 00:12:58.144 00:12:58.144 ' 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:58.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.144 --rc genhtml_branch_coverage=1 00:12:58.144 --rc genhtml_function_coverage=1 00:12:58.144 --rc genhtml_legend=1 00:12:58.144 --rc geninfo_all_blocks=1 00:12:58.144 --rc geninfo_unexecuted_blocks=1 00:12:58.144 00:12:58.144 ' 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:58.144 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.145 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:58.145 12:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:06.290 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:06.290 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:06.290 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:06.290 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.290 12:51:32 
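[editor's note] The device scan above matches the two mlx5 functions (vendor 0x15b3, device 0x1015) and then resolves each PCI address to its netdev through sysfs, which is where the "Found net devices under ..." lines come from. A minimal sketch of that lookup, with the PCI addresses hardcoded from this rig's log:

for pci in 0000:d9:00.0 0000:d9:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do   # kernel exposes the netdev name here
        [ -e "$dev" ] && echo "Found net devices under $pci: $(basename "$dev")"
    done
done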
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:13:06.290 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:06.291 
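rdma_device_init above boils down to loading the kernel RDMA stack in order. A runnable sketch assuming root and in-tree modules, plus the sysfs mapping that get_rdma_if_list walks to turn RDMA devices into netdev names:

```bash
#!/usr/bin/env bash
# Load the RDMA stack in the same order common.sh traces above, then print
# the net device backing each RDMA device.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    sudo modprobe "$mod"
done
for netdev in /sys/class/infiniband/*/device/net/*; do
    [[ -e $netdev ]] || continue                # no RDMA devices present
    echo "${netdev##*/}"                        # e.g. mlx_0_0, mlx_0_1
done
```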
12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:06.291 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:06.291 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:06.291 altname enp217s0f0np0 00:13:06.291 altname ens818f0np0 00:13:06.291 inet 192.168.100.8/24 scope global mlx_0_0 00:13:06.291 valid_lft forever preferred_lft forever 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:06.291 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:06.291 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:06.291 altname enp217s0f1np1 00:13:06.291 altname ens818f1np1 00:13:06.291 inet 192.168.100.9/24 scope global mlx_0_1 00:13:06.291 valid_lft forever preferred_lft forever 00:13:06.291 12:51:32 
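The get_ip_address helper traced above is small enough to reproduce whole; only the interface name is rig-specific:

```bash
# First IPv4 address of an interface, CIDR suffix stripped ($4 of the
# one-line `ip -o` output is e.g. "192.168.100.8/24").
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig
```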
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:06.291 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:06.292 
12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:06.292 192.168.100.9' 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:06.292 192.168.100.9' 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:06.292 192.168.100.9' 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=4120465 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 4120465 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 4120465 ']' 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.292 12:51:32 
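The head/tail juggling above reduces to this; the two addresses are the ones allocate_nic_ips assigned earlier:

```bash
# Split the newline-separated RDMA IP list into first/second target IPs,
# as nvmf/common.sh does in the trace above.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```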
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.292 12:51:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.292 [2024-11-27 12:51:32.342395] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:13:06.292 [2024-11-27 12:51:32.342454] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.292 [2024-11-27 12:51:32.432284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:06.292 [2024-11-27 12:51:32.469811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.292 [2024-11-27 12:51:32.469852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.292 [2024-11-27 12:51:32.469861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.292 [2024-11-27 12:51:32.469869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.292 [2024-11-27 12:51:32.469893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.292 [2024-11-27 12:51:32.471452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.292 [2024-11-27 12:51:32.471521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.292 [2024-11-27 12:51:32.471523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.861 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.861 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:06.861 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:06.861 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:06.861 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.861 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.861 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:06.861 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.861 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.121 [2024-11-27 12:51:33.255759] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x590570/0x594a60) succeed. 
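waitforlisten blocks until the freshly started nvmf_tgt answers JSON-RPC on /var/tmp/spdk.sock. A sketch of that polling pattern; the retry count, interval, and relative rpc.py path are assumptions, the socket path comes from the trace:

```bash
# Poll until the SPDK target serves JSON-RPC on its UNIX domain socket.
rpc_sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    if scripts/rpc.py -t 1 -s "$rpc_sock" rpc_get_methods &>/dev/null; then
        break                      # target is up and listening
    fi
    sleep 0.5
done
```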
00:13:07.121 [2024-11-27 12:51:33.264774] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x591b60/0x5d6100) succeed. 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.121 [2024-11-27 12:51:33.385472] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.121 NULL1 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4120749 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.121 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.121 
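The rpc_cmd calls above map one-to-one onto rpc.py invocations; all arguments below are copied from the trace, only the relative script path is assumed:

```bash
rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10          # allow any host, 10 namespaces max
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_null_create NULL1 1000 512            # 1000 MB null bdev, 512 B blocks
```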
12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 
12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.122 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.380 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.380 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.380 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:07.380 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.381 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.381 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.639 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.639 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:07.639 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.639 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.639 12:51:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.898 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.898 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:07.898 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.898 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.898 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.157 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.157 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:08.157 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.157 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.157 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.724 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.724 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:08.724 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.724 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.724 12:51:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.983 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.983 
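The next several seconds of log are one loop iterating. A paraphrase, with the loop shape inferred from the repeated kill -0 / rpc_cmd pairs (the PID and rpc.txt path are from the trace):

```bash
# kill -0 sends no signal; it only tests that the connect_stress process
# (PID 4120749 in this run) is still alive. While it is, the batch of RPCs
# assembled into rpc.txt by the seq/cat loop above is replayed at the target.
rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt
while kill -0 4120749 2>/dev/null; do
    rpc_cmd < "$rpcs"              # rpc_cmd is SPDK's autotest RPC helper
done
```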
12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:08.983 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.983 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.983 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.242 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.242 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:09.242 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.242 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.242 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.500 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.500 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:09.500 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.500 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.500 12:51:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.758 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.758 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:09.758 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.758 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.758 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.323 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.323 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:10.323 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.323 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.323 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.581 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.581 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:10.581 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.581 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.581 12:51:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.839 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:10.839 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:10.839 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.839 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.839 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.097 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.097 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:11.097 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.097 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.097 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.664 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.664 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:11.664 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.664 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.664 12:51:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.922 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.922 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:11.922 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.922 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.922 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.180 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.180 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:12.180 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.180 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.180 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.439 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.439 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:12.439 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.439 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.439 12:51:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.697 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:12.697 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:12.697 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.697 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.697 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.265 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.265 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:13.265 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.265 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.265 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.524 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:13.524 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.524 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.524 12:51:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.783 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.783 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:13.783 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.783 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.783 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.041 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.041 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:14.041 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.041 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.041 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.610 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.610 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:14.610 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.610 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.610 12:51:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.869 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.869 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:14.869 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.869 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.869 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.127 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.127 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:15.127 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.127 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.127 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.385 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.385 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:15.385 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.385 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.385 12:51:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.643 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.643 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:15.643 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.643 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.643 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.210 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.211 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:16.211 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.211 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.211 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.469 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.469 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:16.469 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.469 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.469 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.728 12:51:42 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.728 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:16.728 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.728 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.728 12:51:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.987 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.987 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:16.987 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.987 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.987 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.246 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4120749 00:13:17.505 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4120749) - No such process 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4120749 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.505 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:17.506 rmmod nvme_rdma 00:13:17.506 rmmod nvme_fabrics 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 4120465 ']' 00:13:17.506 12:51:43 
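The rmmod chatter above is nvmfcleanup's retry loop unloading the initiator modules; the commands and the {1..20} bound are from the trace, the break/sleep placement is an assumption:

```bash
# Unload initiator modules, tolerating "module in use" while connections drain.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e
```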
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 4120465 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 4120465 ']' 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 4120465 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4120465 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4120465' 00:13:17.506 killing process with pid 4120465 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 4120465 00:13:17.506 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 4120465 00:13:17.765 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.765 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:17.765 00:13:17.765 real 0m19.832s 00:13:17.765 user 0m42.162s 00:13:17.765 sys 0m8.687s 00:13:17.765 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.765 12:51:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.765 ************************************ 00:13:17.765 END TEST nvmf_connect_stress 00:13:17.765 ************************************ 00:13:17.765 12:51:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:17.765 12:51:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:17.765 12:51:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.765 12:51:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:17.765 ************************************ 00:13:17.765 START TEST nvmf_fused_ordering 00:13:17.765 ************************************ 00:13:17.765 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:18.025 * Looking for test storage... 
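Stepping back to the teardown above: killprocess refuses to signal a PID whose command name no longer matches a safe target. A condensed paraphrase (the real helper also special-cases processes wrapped in sudo):

```bash
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0          # already exited
    name=$(ps --no-headers -o comm= "$pid")         # reactor_1 in this run
    [[ $name == sudo ]] && return 1                 # never signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap; ignore exit status
}
killprocess 4120465
```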
00:13:18.025 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:18.025 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:18.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.026 --rc genhtml_branch_coverage=1 00:13:18.026 --rc genhtml_function_coverage=1 00:13:18.026 --rc genhtml_legend=1 00:13:18.026 --rc geninfo_all_blocks=1 00:13:18.026 --rc geninfo_unexecuted_blocks=1 00:13:18.026 00:13:18.026 ' 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:18.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.026 --rc genhtml_branch_coverage=1 00:13:18.026 --rc genhtml_function_coverage=1 00:13:18.026 --rc genhtml_legend=1 00:13:18.026 --rc geninfo_all_blocks=1 00:13:18.026 --rc geninfo_unexecuted_blocks=1 00:13:18.026 00:13:18.026 ' 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:18.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.026 --rc genhtml_branch_coverage=1 00:13:18.026 --rc genhtml_function_coverage=1 00:13:18.026 --rc genhtml_legend=1 00:13:18.026 --rc geninfo_all_blocks=1 00:13:18.026 --rc geninfo_unexecuted_blocks=1 00:13:18.026 00:13:18.026 ' 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:18.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.026 --rc genhtml_branch_coverage=1 00:13:18.026 --rc genhtml_function_coverage=1 00:13:18.026 --rc genhtml_legend=1 00:13:18.026 --rc geninfo_all_blocks=1 00:13:18.026 --rc geninfo_unexecuted_blocks=1 00:13:18.026 00:13:18.026 ' 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
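The lt/cmp_versions trace above compares dotted versions component-wise after splitting on IFS=.-; here "lt 1.15 2" asks whether the installed lcov predates 2.x. A compact reimplementation of that logic (a sketch, not scripts/common.sh verbatim):

```bash
# Return 0 if dotted version $1 is strictly less than $2.
version_lt() {
    local IFS=.- v1 v2 i
    read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "older"   # prints "older"
```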
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.026 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:18.026 12:51:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.009 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:28.010 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:28.010 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:28.010 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:28.010 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.010 12:51:52 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.010 
12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:28.010 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:28.010 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:28.010 altname enp217s0f0np0 00:13:28.010 altname ens818f0np0 00:13:28.010 inet 192.168.100.8/24 scope global mlx_0_0 00:13:28.010 valid_lft forever preferred_lft forever 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:28.010 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:28.010 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:28.010 altname enp217s0f1np1 00:13:28.010 altname ens818f1np1 00:13:28.010 inet 192.168.100.9/24 scope global mlx_0_1 00:13:28.010 valid_lft forever preferred_lft forever 00:13:28.010 12:51:52 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:28.010 12:51:52 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.010 
12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:28.010 192.168.100.9' 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:28.010 192.168.100.9' 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:28.010 192.168.100.9' 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=4126656 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 4126656 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 4126656 ']' 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.010 12:51:53 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.010 [2024-11-27 12:51:53.166095] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:13:28.010 [2024-11-27 12:51:53.166155] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.010 [2024-11-27 12:51:53.257636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.010 [2024-11-27 12:51:53.295236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.010 [2024-11-27 12:51:53.295271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.010 [2024-11-27 12:51:53.295280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.010 [2024-11-27 12:51:53.295288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.010 [2024-11-27 12:51:53.295310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.010 [2024-11-27 12:51:53.295913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.010 12:51:53 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.010 [2024-11-27 12:51:54.063941] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x845ea0/0x84a390) succeed. 00:13:28.010 [2024-11-27 12:51:54.072533] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x847350/0x88ba30) succeed. 
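A few entries back, nvmftestinit derives the test-bed addresses by walking the RDMA-capable netdevs (mlx_0_0, mlx_0_1) and parsing "ip -o -4 addr show" output, which is how 192.168.100.8 and 192.168.100.9 end up in RDMA_IP_LIST. A minimal standalone sketch of that lookup, matching the get_ip_address pipeline visible in the trace (the interface names are the ones found on this bed):

    # Sketch of nvmf/common.sh's get_ip_address as exercised above:
    # "ip -o -4" prints one line per IPv4 address; field 4 is ADDR/PREFIX,
    # so awk selects it and cut strips the /PREFIX suffix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this test bed
    get_ip_address mlx_0_1   # -> 192.168.100.9

The first address becomes NVMF_FIRST_TARGET_IP (via head -n 1) and the second NVMF_SECOND_TARGET_IP (via tail -n +2 | head -n 1), as the trace shows.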
00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.010 [2024-11-27 12:51:54.118391] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:28.010 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.011 NULL1 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.011 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:28.011 [2024-11-27 12:51:54.179204] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
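The subsystem bring-up just traced is a short RPC sequence. Collected in one place, and assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket that waitforlisten polled, it is equivalent to:

    # Sketch of the target configuration issued via rpc_cmd above.
    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512 B blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The null bdev gives the initiator a data-less ~1 GB namespace to aim commands at, which matches the "Namespace ID: 1 size: 1GB" line the test binary prints below.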
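With the target configured, fused_ordering.sh launches the test binary against that listener; everything it needs to connect is packed into a single transport ID string, exactly as recorded in the trace:

    # Sketch: the invocation from the trace. -r takes an SPDK transport ID
    # whose fields must match the RDMA listener created a moment earlier.
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The numbered fused_ordering(N) lines that follow are progress counters printed by the binary itself as it works through its fused-command submissions.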
00:13:28.011 [2024-11-27 12:51:54.179241] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4126830 ] 00:13:28.310 Attached to nqn.2016-06.io.spdk:cnode1 00:13:28.310 Namespace ID: 1 size: 1GB 00:13:28.310 fused_ordering(0) 00:13:28.310 fused_ordering(1) 00:13:28.310 fused_ordering(2) 00:13:28.310 fused_ordering(3) 00:13:28.310 fused_ordering(4) 00:13:28.310 fused_ordering(5) 00:13:28.310 fused_ordering(6) 00:13:28.310 fused_ordering(7) 00:13:28.310 fused_ordering(8) 00:13:28.310 fused_ordering(9) 00:13:28.310 fused_ordering(10) 00:13:28.310 fused_ordering(11) 00:13:28.310 fused_ordering(12) 00:13:28.310 fused_ordering(13) 00:13:28.310 fused_ordering(14) 00:13:28.310 fused_ordering(15) 00:13:28.310 fused_ordering(16) 00:13:28.310 fused_ordering(17) 00:13:28.310 fused_ordering(18) 00:13:28.310 fused_ordering(19) 00:13:28.310 fused_ordering(20) 00:13:28.310 fused_ordering(21) 00:13:28.310 fused_ordering(22) 00:13:28.310 fused_ordering(23) 00:13:28.310 fused_ordering(24) 00:13:28.310 fused_ordering(25) 00:13:28.310 fused_ordering(26) 00:13:28.310 fused_ordering(27) 00:13:28.310 fused_ordering(28) 00:13:28.310 fused_ordering(29) 00:13:28.310 fused_ordering(30) 00:13:28.310 fused_ordering(31) 00:13:28.310 fused_ordering(32) 00:13:28.310 fused_ordering(33) 00:13:28.310 fused_ordering(34) 00:13:28.310 fused_ordering(35) 00:13:28.310 fused_ordering(36) 00:13:28.310 fused_ordering(37) 00:13:28.310 fused_ordering(38) 00:13:28.310 fused_ordering(39) 00:13:28.310 fused_ordering(40) 00:13:28.310 fused_ordering(41) 00:13:28.310 fused_ordering(42) 00:13:28.310 fused_ordering(43) 00:13:28.310 fused_ordering(44) 00:13:28.310 fused_ordering(45) 00:13:28.310 fused_ordering(46) 00:13:28.310 fused_ordering(47) 00:13:28.310 fused_ordering(48) 00:13:28.310 fused_ordering(49) 00:13:28.310 fused_ordering(50) 00:13:28.310 fused_ordering(51) 00:13:28.310 fused_ordering(52) 00:13:28.310 fused_ordering(53) 00:13:28.310 fused_ordering(54) 00:13:28.310 fused_ordering(55) 00:13:28.310 fused_ordering(56) 00:13:28.310 fused_ordering(57) 00:13:28.310 fused_ordering(58) 00:13:28.310 fused_ordering(59) 00:13:28.310 fused_ordering(60) 00:13:28.310 fused_ordering(61) 00:13:28.310 fused_ordering(62) 00:13:28.310 fused_ordering(63) 00:13:28.310 fused_ordering(64) 00:13:28.310 fused_ordering(65) 00:13:28.310 fused_ordering(66) 00:13:28.310 fused_ordering(67) 00:13:28.310 fused_ordering(68) 00:13:28.310 fused_ordering(69) 00:13:28.310 fused_ordering(70) 00:13:28.310 fused_ordering(71) 00:13:28.310 fused_ordering(72) 00:13:28.310 fused_ordering(73) 00:13:28.310 fused_ordering(74) 00:13:28.310 fused_ordering(75) 00:13:28.310 fused_ordering(76) 00:13:28.310 fused_ordering(77) 00:13:28.310 fused_ordering(78) 00:13:28.310 fused_ordering(79) 00:13:28.310 fused_ordering(80) 00:13:28.310 fused_ordering(81) 00:13:28.310 fused_ordering(82) 00:13:28.310 fused_ordering(83) 00:13:28.310 fused_ordering(84) 00:13:28.310 fused_ordering(85) 00:13:28.310 fused_ordering(86) 00:13:28.310 fused_ordering(87) 00:13:28.310 fused_ordering(88) 00:13:28.310 fused_ordering(89) 00:13:28.310 fused_ordering(90) 00:13:28.310 fused_ordering(91) 00:13:28.310 fused_ordering(92) 00:13:28.310 fused_ordering(93) 00:13:28.310 fused_ordering(94) 00:13:28.310 fused_ordering(95) 00:13:28.310 fused_ordering(96) 00:13:28.310 fused_ordering(97) 00:13:28.310 fused_ordering(98) 
00:13:28.310 fused_ordering(99) 00:13:28.310 fused_ordering(100) 00:13:28.310 fused_ordering(101) 00:13:28.310 fused_ordering(102) 00:13:28.310 fused_ordering(103) 00:13:28.310 fused_ordering(104) 00:13:28.310 fused_ordering(105) 00:13:28.310 fused_ordering(106) 00:13:28.310 fused_ordering(107) 00:13:28.310 fused_ordering(108) 00:13:28.310 fused_ordering(109) 00:13:28.310 fused_ordering(110) 00:13:28.310 fused_ordering(111) 00:13:28.310 fused_ordering(112) 00:13:28.310 fused_ordering(113) 00:13:28.310 fused_ordering(114) 00:13:28.310 fused_ordering(115) 00:13:28.310 fused_ordering(116) 00:13:28.310 fused_ordering(117) 00:13:28.310 fused_ordering(118) 00:13:28.310 fused_ordering(119) 00:13:28.310 fused_ordering(120) 00:13:28.310 fused_ordering(121) 00:13:28.310 fused_ordering(122) 00:13:28.310 fused_ordering(123) 00:13:28.310 fused_ordering(124) 00:13:28.310 fused_ordering(125) 00:13:28.310 fused_ordering(126) 00:13:28.310 fused_ordering(127) 00:13:28.310 fused_ordering(128) 00:13:28.310 fused_ordering(129) 00:13:28.310 fused_ordering(130) 00:13:28.310 fused_ordering(131) 00:13:28.310 fused_ordering(132) 00:13:28.310 fused_ordering(133) 00:13:28.310 fused_ordering(134) 00:13:28.310 fused_ordering(135) 00:13:28.310 fused_ordering(136) 00:13:28.310 fused_ordering(137) 00:13:28.310 fused_ordering(138) 00:13:28.310 fused_ordering(139) 00:13:28.310 fused_ordering(140) 00:13:28.310 fused_ordering(141) 00:13:28.310 fused_ordering(142) 00:13:28.310 fused_ordering(143) 00:13:28.310 fused_ordering(144) 00:13:28.310 fused_ordering(145) 00:13:28.310 fused_ordering(146) 00:13:28.310 fused_ordering(147) 00:13:28.310 fused_ordering(148) 00:13:28.310 fused_ordering(149) 00:13:28.310 fused_ordering(150) 00:13:28.310 fused_ordering(151) 00:13:28.310 fused_ordering(152) 00:13:28.310 fused_ordering(153) 00:13:28.310 fused_ordering(154) 00:13:28.310 fused_ordering(155) 00:13:28.310 fused_ordering(156) 00:13:28.310 fused_ordering(157) 00:13:28.310 fused_ordering(158) 00:13:28.310 fused_ordering(159) 00:13:28.310 fused_ordering(160) 00:13:28.310 fused_ordering(161) 00:13:28.310 fused_ordering(162) 00:13:28.310 fused_ordering(163) 00:13:28.310 fused_ordering(164) 00:13:28.310 fused_ordering(165) 00:13:28.310 fused_ordering(166) 00:13:28.310 fused_ordering(167) 00:13:28.310 fused_ordering(168) 00:13:28.310 fused_ordering(169) 00:13:28.310 fused_ordering(170) 00:13:28.310 fused_ordering(171) 00:13:28.310 fused_ordering(172) 00:13:28.310 fused_ordering(173) 00:13:28.310 fused_ordering(174) 00:13:28.310 fused_ordering(175) 00:13:28.310 fused_ordering(176) 00:13:28.310 fused_ordering(177) 00:13:28.310 fused_ordering(178) 00:13:28.310 fused_ordering(179) 00:13:28.310 fused_ordering(180) 00:13:28.310 fused_ordering(181) 00:13:28.310 fused_ordering(182) 00:13:28.310 fused_ordering(183) 00:13:28.310 fused_ordering(184) 00:13:28.310 fused_ordering(185) 00:13:28.310 fused_ordering(186) 00:13:28.310 fused_ordering(187) 00:13:28.310 fused_ordering(188) 00:13:28.310 fused_ordering(189) 00:13:28.310 fused_ordering(190) 00:13:28.310 fused_ordering(191) 00:13:28.310 fused_ordering(192) 00:13:28.310 fused_ordering(193) 00:13:28.310 fused_ordering(194) 00:13:28.310 fused_ordering(195) 00:13:28.310 fused_ordering(196) 00:13:28.310 fused_ordering(197) 00:13:28.310 fused_ordering(198) 00:13:28.310 fused_ordering(199) 00:13:28.310 fused_ordering(200) 00:13:28.310 fused_ordering(201) 00:13:28.310 fused_ordering(202) 00:13:28.310 fused_ordering(203) 00:13:28.310 fused_ordering(204) 00:13:28.310 fused_ordering(205) 00:13:28.310 
fused_ordering(206) 00:13:28.310 fused_ordering(207) 00:13:28.310 fused_ordering(208) 00:13:28.310 fused_ordering(209) 00:13:28.310 fused_ordering(210) 00:13:28.310 fused_ordering(211) 00:13:28.310 fused_ordering(212) 00:13:28.310 fused_ordering(213) 00:13:28.310 fused_ordering(214) 00:13:28.310 fused_ordering(215) 00:13:28.310 fused_ordering(216) 00:13:28.310 fused_ordering(217) 00:13:28.310 fused_ordering(218) 00:13:28.310 fused_ordering(219) 00:13:28.310 fused_ordering(220) 00:13:28.310 fused_ordering(221) 00:13:28.310 fused_ordering(222) 00:13:28.310 fused_ordering(223) 00:13:28.310 fused_ordering(224) 00:13:28.310 fused_ordering(225) 00:13:28.310 fused_ordering(226) 00:13:28.310 fused_ordering(227) 00:13:28.310 fused_ordering(228) 00:13:28.311 fused_ordering(229) 00:13:28.311 fused_ordering(230) 00:13:28.311 fused_ordering(231) 00:13:28.311 fused_ordering(232) 00:13:28.311 fused_ordering(233) 00:13:28.311 fused_ordering(234) 00:13:28.311 fused_ordering(235) 00:13:28.311 fused_ordering(236) 00:13:28.311 fused_ordering(237) 00:13:28.311 fused_ordering(238) 00:13:28.311 fused_ordering(239) 00:13:28.311 fused_ordering(240) 00:13:28.311 fused_ordering(241) 00:13:28.311 fused_ordering(242) 00:13:28.311 fused_ordering(243) 00:13:28.311 fused_ordering(244) 00:13:28.311 fused_ordering(245) 00:13:28.311 fused_ordering(246) 00:13:28.311 fused_ordering(247) 00:13:28.311 fused_ordering(248) 00:13:28.311 fused_ordering(249) 00:13:28.311 fused_ordering(250) 00:13:28.311 fused_ordering(251) 00:13:28.311 fused_ordering(252) 00:13:28.311 fused_ordering(253) 00:13:28.311 fused_ordering(254) 00:13:28.311 fused_ordering(255) 00:13:28.311 fused_ordering(256) 00:13:28.311 fused_ordering(257) 00:13:28.311 fused_ordering(258) 00:13:28.311 fused_ordering(259) 00:13:28.311 fused_ordering(260) 00:13:28.311 fused_ordering(261) 00:13:28.311 fused_ordering(262) 00:13:28.311 fused_ordering(263) 00:13:28.311 fused_ordering(264) 00:13:28.311 fused_ordering(265) 00:13:28.311 fused_ordering(266) 00:13:28.311 fused_ordering(267) 00:13:28.311 fused_ordering(268) 00:13:28.311 fused_ordering(269) 00:13:28.311 fused_ordering(270) 00:13:28.311 fused_ordering(271) 00:13:28.311 fused_ordering(272) 00:13:28.311 fused_ordering(273) 00:13:28.311 fused_ordering(274) 00:13:28.311 fused_ordering(275) 00:13:28.311 fused_ordering(276) 00:13:28.311 fused_ordering(277) 00:13:28.311 fused_ordering(278) 00:13:28.311 fused_ordering(279) 00:13:28.311 fused_ordering(280) 00:13:28.311 fused_ordering(281) 00:13:28.311 fused_ordering(282) 00:13:28.311 fused_ordering(283) 00:13:28.311 fused_ordering(284) 00:13:28.311 fused_ordering(285) 00:13:28.311 fused_ordering(286) 00:13:28.311 fused_ordering(287) 00:13:28.311 fused_ordering(288) 00:13:28.311 fused_ordering(289) 00:13:28.311 fused_ordering(290) 00:13:28.311 fused_ordering(291) 00:13:28.311 fused_ordering(292) 00:13:28.311 fused_ordering(293) 00:13:28.311 fused_ordering(294) 00:13:28.311 fused_ordering(295) 00:13:28.311 fused_ordering(296) 00:13:28.311 fused_ordering(297) 00:13:28.311 fused_ordering(298) 00:13:28.311 fused_ordering(299) 00:13:28.311 fused_ordering(300) 00:13:28.311 fused_ordering(301) 00:13:28.311 fused_ordering(302) 00:13:28.311 fused_ordering(303) 00:13:28.311 fused_ordering(304) 00:13:28.311 fused_ordering(305) 00:13:28.311 fused_ordering(306) 00:13:28.311 fused_ordering(307) 00:13:28.311 fused_ordering(308) 00:13:28.311 fused_ordering(309) 00:13:28.311 fused_ordering(310) 00:13:28.311 fused_ordering(311) 00:13:28.311 fused_ordering(312) 00:13:28.311 fused_ordering(313) 
00:13:28.311 fused_ordering(314) 00:13:28.311 fused_ordering(315) 00:13:28.311 fused_ordering(316) 00:13:28.311 fused_ordering(317) 00:13:28.311 fused_ordering(318) 00:13:28.311 fused_ordering(319) 00:13:28.311 fused_ordering(320) 00:13:28.311 fused_ordering(321) 00:13:28.311 fused_ordering(322) 00:13:28.311 fused_ordering(323) 00:13:28.311 fused_ordering(324) 00:13:28.311 fused_ordering(325) 00:13:28.311 fused_ordering(326) 00:13:28.311 fused_ordering(327) 00:13:28.311 fused_ordering(328) 00:13:28.311 fused_ordering(329) 00:13:28.311 fused_ordering(330) 00:13:28.311 fused_ordering(331) 00:13:28.311 fused_ordering(332) 00:13:28.311 fused_ordering(333) 00:13:28.311 fused_ordering(334) 00:13:28.311 fused_ordering(335) 00:13:28.311 fused_ordering(336) 00:13:28.311 fused_ordering(337) 00:13:28.311 fused_ordering(338) 00:13:28.311 fused_ordering(339) 00:13:28.311 fused_ordering(340) 00:13:28.311 fused_ordering(341) 00:13:28.311 fused_ordering(342) 00:13:28.311 fused_ordering(343) 00:13:28.311 fused_ordering(344) 00:13:28.311 fused_ordering(345) 00:13:28.311 fused_ordering(346) 00:13:28.311 fused_ordering(347) 00:13:28.311 fused_ordering(348) 00:13:28.311 fused_ordering(349) 00:13:28.311 fused_ordering(350) 00:13:28.311 fused_ordering(351) 00:13:28.311 fused_ordering(352) 00:13:28.311 fused_ordering(353) 00:13:28.311 fused_ordering(354) 00:13:28.311 fused_ordering(355) 00:13:28.311 fused_ordering(356) 00:13:28.311 fused_ordering(357) 00:13:28.311 fused_ordering(358) 00:13:28.311 fused_ordering(359) 00:13:28.311 fused_ordering(360) 00:13:28.311 fused_ordering(361) 00:13:28.311 fused_ordering(362) 00:13:28.311 fused_ordering(363) 00:13:28.311 fused_ordering(364) 00:13:28.311 fused_ordering(365) 00:13:28.311 fused_ordering(366) 00:13:28.311 fused_ordering(367) 00:13:28.311 fused_ordering(368) 00:13:28.311 fused_ordering(369) 00:13:28.311 fused_ordering(370) 00:13:28.311 fused_ordering(371) 00:13:28.311 fused_ordering(372) 00:13:28.311 fused_ordering(373) 00:13:28.311 fused_ordering(374) 00:13:28.311 fused_ordering(375) 00:13:28.311 fused_ordering(376) 00:13:28.311 fused_ordering(377) 00:13:28.311 fused_ordering(378) 00:13:28.311 fused_ordering(379) 00:13:28.311 fused_ordering(380) 00:13:28.311 fused_ordering(381) 00:13:28.311 fused_ordering(382) 00:13:28.311 fused_ordering(383) 00:13:28.311 fused_ordering(384) 00:13:28.311 fused_ordering(385) 00:13:28.311 fused_ordering(386) 00:13:28.311 fused_ordering(387) 00:13:28.311 fused_ordering(388) 00:13:28.311 fused_ordering(389) 00:13:28.311 fused_ordering(390) 00:13:28.311 fused_ordering(391) 00:13:28.311 fused_ordering(392) 00:13:28.311 fused_ordering(393) 00:13:28.311 fused_ordering(394) 00:13:28.311 fused_ordering(395) 00:13:28.311 fused_ordering(396) 00:13:28.311 fused_ordering(397) 00:13:28.311 fused_ordering(398) 00:13:28.311 fused_ordering(399) 00:13:28.311 fused_ordering(400) 00:13:28.311 fused_ordering(401) 00:13:28.311 fused_ordering(402) 00:13:28.311 fused_ordering(403) 00:13:28.311 fused_ordering(404) 00:13:28.311 fused_ordering(405) 00:13:28.311 fused_ordering(406) 00:13:28.311 fused_ordering(407) 00:13:28.311 fused_ordering(408) 00:13:28.311 fused_ordering(409) 00:13:28.311 fused_ordering(410) 00:13:28.311 fused_ordering(411) 00:13:28.311 fused_ordering(412) 00:13:28.311 fused_ordering(413) 00:13:28.311 fused_ordering(414) 00:13:28.311 fused_ordering(415) 00:13:28.311 fused_ordering(416) 00:13:28.311 fused_ordering(417) 00:13:28.311 fused_ordering(418) 00:13:28.311 fused_ordering(419) 00:13:28.311 fused_ordering(420) 00:13:28.311 
fused_ordering(421) 00:13:28.311 fused_ordering(422) 00:13:28.311 fused_ordering(423) 00:13:28.311 fused_ordering(424) 00:13:28.311 fused_ordering(425) 00:13:28.311 fused_ordering(426) 00:13:28.311 fused_ordering(427) 00:13:28.311 fused_ordering(428) 00:13:28.311 fused_ordering(429) 00:13:28.311 fused_ordering(430) 00:13:28.311 fused_ordering(431) 00:13:28.311 fused_ordering(432) 00:13:28.311 fused_ordering(433) 00:13:28.311 fused_ordering(434) 00:13:28.311 fused_ordering(435) 00:13:28.311 fused_ordering(436) 00:13:28.311 fused_ordering(437) 00:13:28.311 fused_ordering(438) 00:13:28.311 fused_ordering(439) 00:13:28.311 fused_ordering(440) 00:13:28.311 fused_ordering(441) 00:13:28.311 fused_ordering(442) 00:13:28.311 fused_ordering(443) 00:13:28.311 fused_ordering(444) 00:13:28.311 fused_ordering(445) 00:13:28.311 fused_ordering(446) 00:13:28.311 fused_ordering(447) 00:13:28.311 fused_ordering(448) 00:13:28.311 fused_ordering(449) 00:13:28.311 fused_ordering(450) 00:13:28.311 fused_ordering(451) 00:13:28.311 fused_ordering(452) 00:13:28.311 fused_ordering(453) 00:13:28.311 fused_ordering(454) 00:13:28.311 fused_ordering(455) 00:13:28.311 fused_ordering(456) 00:13:28.311 fused_ordering(457) 00:13:28.311 fused_ordering(458) 00:13:28.311 fused_ordering(459) 00:13:28.311 fused_ordering(460) 00:13:28.311 fused_ordering(461) 00:13:28.311 fused_ordering(462) 00:13:28.311 fused_ordering(463) 00:13:28.311 fused_ordering(464) 00:13:28.311 fused_ordering(465) 00:13:28.311 fused_ordering(466) 00:13:28.311 fused_ordering(467) 00:13:28.311 fused_ordering(468) 00:13:28.311 fused_ordering(469) 00:13:28.311 fused_ordering(470) 00:13:28.311 fused_ordering(471) 00:13:28.311 fused_ordering(472) 00:13:28.311 fused_ordering(473) 00:13:28.311 fused_ordering(474) 00:13:28.311 fused_ordering(475) 00:13:28.311 fused_ordering(476) 00:13:28.311 fused_ordering(477) 00:13:28.311 fused_ordering(478) 00:13:28.311 fused_ordering(479) 00:13:28.311 fused_ordering(480) 00:13:28.311 fused_ordering(481) 00:13:28.311 fused_ordering(482) 00:13:28.311 fused_ordering(483) 00:13:28.311 fused_ordering(484) 00:13:28.311 fused_ordering(485) 00:13:28.311 fused_ordering(486) 00:13:28.311 fused_ordering(487) 00:13:28.311 fused_ordering(488) 00:13:28.311 fused_ordering(489) 00:13:28.311 fused_ordering(490) 00:13:28.311 fused_ordering(491) 00:13:28.311 fused_ordering(492) 00:13:28.311 fused_ordering(493) 00:13:28.311 fused_ordering(494) 00:13:28.311 fused_ordering(495) 00:13:28.311 fused_ordering(496) 00:13:28.311 fused_ordering(497) 00:13:28.311 fused_ordering(498) 00:13:28.311 fused_ordering(499) 00:13:28.311 fused_ordering(500) 00:13:28.311 fused_ordering(501) 00:13:28.311 fused_ordering(502) 00:13:28.311 fused_ordering(503) 00:13:28.311 fused_ordering(504) 00:13:28.311 fused_ordering(505) 00:13:28.311 fused_ordering(506) 00:13:28.311 fused_ordering(507) 00:13:28.312 fused_ordering(508) 00:13:28.312 fused_ordering(509) 00:13:28.312 fused_ordering(510) 00:13:28.312 fused_ordering(511) 00:13:28.312 fused_ordering(512) 00:13:28.312 fused_ordering(513) 00:13:28.312 fused_ordering(514) 00:13:28.312 fused_ordering(515) 00:13:28.312 fused_ordering(516) 00:13:28.312 fused_ordering(517) 00:13:28.312 fused_ordering(518) 00:13:28.312 fused_ordering(519) 00:13:28.312 fused_ordering(520) 00:13:28.312 fused_ordering(521) 00:13:28.312 fused_ordering(522) 00:13:28.312 fused_ordering(523) 00:13:28.312 fused_ordering(524) 00:13:28.312 fused_ordering(525) 00:13:28.312 fused_ordering(526) 00:13:28.312 fused_ordering(527) 00:13:28.312 fused_ordering(528) 
00:13:28.312 fused_ordering(529) [fused_ordering(530) through fused_ordering(1022) elided: ~490 identical counter entries emitted between 00:13:28.312 and 00:13:28.672] 00:13:28.672 fused_ordering(1023) 00:13:28.672 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:28.673 rmmod nvme_rdma 00:13:28.673 rmmod nvme_fabrics 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:28.673 12:51:54
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 4126656 ']' 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 4126656 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 4126656 ']' 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 4126656 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4126656 00:13:28.673 12:51:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:28.673 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:28.673 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4126656' 00:13:28.673 killing process with pid 4126656 00:13:28.673 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 4126656 00:13:28.673 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 4126656 00:13:28.957 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:28.957 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:28.957 00:13:28.957 real 0m11.111s 00:13:28.957 user 0m5.427s 00:13:28.957 sys 0m7.169s 00:13:28.957 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.957 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.957 ************************************ 00:13:28.957 END TEST nvmf_fused_ordering 00:13:28.957 ************************************ 00:13:28.957 12:51:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:13:28.957 12:51:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:28.957 12:51:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.957 12:51:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:28.957 ************************************ 00:13:28.957 START TEST nvmf_ns_masking 00:13:28.957 ************************************ 00:13:28.957 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:13:29.218 * Looking for test storage... 
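The killprocess trace above (autotest_common.sh@954-978) reduces to a small guard-then-kill helper. The sketch below is a reconstruction from the traced commands only, not the exact SPDK source; the sudo guard mirrors the '[' reactor_1 = sudo ']' test in the log.

killprocess() {
    local pid=$1
    # refuse an empty pid, then confirm the process is still alive
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0
    if [ "$(uname)" = Linux ]; then
        # never kill a sudo wrapper; here the name resolved to reactor_1
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}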
00:13:29.218 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:29.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.218 --rc genhtml_branch_coverage=1 00:13:29.218 --rc genhtml_function_coverage=1 00:13:29.218 --rc genhtml_legend=1 00:13:29.218 --rc geninfo_all_blocks=1 00:13:29.218 --rc geninfo_unexecuted_blocks=1 00:13:29.218 00:13:29.218 ' 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:29.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.218 --rc genhtml_branch_coverage=1 00:13:29.218 --rc genhtml_function_coverage=1 00:13:29.218 --rc genhtml_legend=1 00:13:29.218 --rc geninfo_all_blocks=1 00:13:29.218 --rc geninfo_unexecuted_blocks=1 00:13:29.218 00:13:29.218 ' 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:29.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.218 --rc genhtml_branch_coverage=1 00:13:29.218 --rc genhtml_function_coverage=1 00:13:29.218 --rc genhtml_legend=1 00:13:29.218 --rc geninfo_all_blocks=1 00:13:29.218 --rc geninfo_unexecuted_blocks=1 00:13:29.218 00:13:29.218 ' 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:29.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.218 --rc genhtml_branch_coverage=1 00:13:29.218 --rc genhtml_function_coverage=1 00:13:29.218 --rc genhtml_legend=1 00:13:29.218 --rc geninfo_all_blocks=1 00:13:29.218 --rc geninfo_unexecuted_blocks=1 00:13:29.218 00:13:29.218 ' 00:13:29.218 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.219 12:51:55 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.219 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:29.219 12:51:55 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c2b3aa45-6c4f-4044-963e-79727ebb39dc 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=26dbb045-cdbd-4f9e-9ec0-d68fbafe5344 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5c5f7fd9-4cda-4249-b8ca-f17d97b11441 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:29.219 12:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.203 12:52:03 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:39.203 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:39.203 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:39.203 12:52:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:39.203 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:39.203 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:39.203 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:39.204 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:39.204 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:39.204 altname enp217s0f0np0 00:13:39.204 altname ens818f0np0 00:13:39.204 inet 192.168.100.8/24 scope global mlx_0_0 00:13:39.204 valid_lft forever preferred_lft forever 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:39.204 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:39.204 link/ether 
ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:39.204 altname enp217s0f1np1 00:13:39.204 altname ens818f1np1 00:13:39.204 inet 192.168.100.9/24 scope global mlx_0_1 00:13:39.204 valid_lft forever preferred_lft forever 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
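For reference, the get_ip_address helper traced repeatedly above (common.sh@116-117) is just the pipeline below; a minimal sketch assuming the same ip/awk/cut toolchain, with interface names taken from this rig.

get_ip_address() {
    local interface=$1
    # column 4 of `ip -o -4 addr show` is the CIDR address, e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this test bed
get_ip_address mlx_0_1   # -> 192.168.100.9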
00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:39.204 192.168.100.9' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:39.204 192.168.100.9' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:39.204 192.168.100.9' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=4131265 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 4131265 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 4131265 ']' 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.204 12:52:04 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.204 12:52:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.204 [2024-11-27 12:52:04.312844] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:13:39.204 [2024-11-27 12:52:04.312893] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.204 [2024-11-27 12:52:04.400969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.204 [2024-11-27 12:52:04.439706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.205 [2024-11-27 12:52:04.439744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.205 [2024-11-27 12:52:04.439754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.205 [2024-11-27 12:52:04.439762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.205 [2024-11-27 12:52:04.439770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.205 [2024-11-27 12:52:04.440357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.205 12:52:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.205 12:52:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:39.205 12:52:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:39.205 12:52:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:39.205 12:52:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.205 12:52:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.205 12:52:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:39.205 [2024-11-27 12:52:05.369448] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ff7b80/0x1ffc070) succeed. 00:13:39.205 [2024-11-27 12:52:05.378287] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ff9030/0x203d710) succeed. 
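The nvmfappstart and nvmf_create_transport steps just logged can be reproduced by hand with roughly the sequence below. The flags are the ones from the trace; the readiness loop stands in for waitforlisten, and polling rpc_get_methods is an assumption (any cheap RPC that succeeds once the socket is up would do).

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# wait until the target answers on its RPC socket before issuing RPCs
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# RDMA transport with the shared-buffer sizing used by this run
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192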
00:13:39.205 12:52:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:39.205 12:52:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:39.205 12:52:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:39.463 Malloc1 00:13:39.463 12:52:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:39.463 Malloc2 00:13:39.721 12:52:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:39.721 12:52:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:39.979 12:52:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:40.238 [2024-11-27 12:52:06.393032] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:40.238 12:52:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:40.238 12:52:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5c5f7fd9-4cda-4249-b8ca-f17d97b11441 -a 192.168.100.8 -s 4420 -i 4 00:13:40.496 12:52:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:40.496 12:52:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:40.496 12:52:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.496 12:52:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:40.496 12:52:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:42.402 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:42.402 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:42.402 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.402 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:42.402 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.402 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:42.402 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:42.402 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:13:42.660 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:42.660 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:42.660 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:42.660 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.660 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:42.660 [ 0]:0x1 00:13:42.660 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:42.660 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.660 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=51111668f4324253a01a91516e48ba61 00:13:42.660 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 51111668f4324253a01a91516e48ba61 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.660 12:52:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:42.660 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:42.660 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.660 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:42.660 [ 0]:0x1 00:13:42.660 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:42.660 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.919 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=51111668f4324253a01a91516e48ba61 00:13:42.919 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 51111668f4324253a01a91516e48ba61 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.919 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:42.919 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.919 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:42.919 [ 1]:0x2 00:13:42.919 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:42.919 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.919 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c060c4b86ef4e5ab0ae39357d02c619 00:13:42.919 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c060c4b86ef4e5ab0ae39357d02c619 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.919 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:42.919 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:13:43.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.178 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.436 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:43.695 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:43.695 12:52:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5c5f7fd9-4cda-4249-b8ca-f17d97b11441 -a 192.168.100.8 -s 4420 -i 4 00:13:43.953 12:52:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:43.953 12:52:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:43.953 12:52:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:43.953 12:52:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:43.953 12:52:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:43.953 12:52:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:45.859 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:46.118 [ 0]:0x2 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c060c4b86ef4e5ab0ae39357d02c619 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c060c4b86ef4e5ab0ae39357d02c619 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.118 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.377 [ 0]:0x1 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.377 12:52:12 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=51111668f4324253a01a91516e48ba61 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 51111668f4324253a01a91516e48ba61 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:46.377 [ 1]:0x2 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c060c4b86ef4e5ab0ae39357d02c619 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c060c4b86ef4e5ab0ae39357d02c619 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.377 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:46.636 [ 0]:0x2 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c060c4b86ef4e5ab0ae39357d02c619 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c060c4b86ef4e5ab0ae39357d02c619 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:46.636 12:52:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:46.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.896 12:52:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:47.155 12:52:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:47.155 12:52:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5c5f7fd9-4cda-4249-b8ca-f17d97b11441 -a 192.168.100.8 -s 4420 -i 4 00:13:47.414 12:52:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:47.414 12:52:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:47.414 12:52:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:47.414 12:52:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:47.414 12:52:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:47.414 12:52:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:49.973 12:52:15 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:49.973 [ 0]:0x1 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=51111668f4324253a01a91516e48ba61 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 51111668f4324253a01a91516e48ba61 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.973 [ 1]:0x2 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c060c4b86ef4e5ab0ae39357d02c619 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c060c4b86ef4e5ab0ae39357d02c619 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.973 12:52:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:49.973 12:52:16 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:49.973 [ 0]:0x2 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c060c4b86ef4e5ab0ae39357d02c619 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c060c4b86ef4e5ab0ae39357d02c619 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:49.973 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.974 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:49.974 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:49.974 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:50.232 [2024-11-27 12:52:16.366710] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:50.232 request: 00:13:50.232 { 00:13:50.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.232 "nsid": 2, 00:13:50.232 "host": "nqn.2016-06.io.spdk:host1", 00:13:50.232 "method": "nvmf_ns_remove_host", 00:13:50.232 "req_id": 1 00:13:50.232 } 00:13:50.232 Got JSON-RPC error response 00:13:50.232 response: 00:13:50.232 { 00:13:50.232 "code": -32602, 00:13:50.232 "message": "Invalid parameters" 00:13:50.232 } 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:50.232 12:52:16 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:50.232 [ 0]:0x2 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c060c4b86ef4e5ab0ae39357d02c619 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c060c4b86ef4e5ab0ae39357d02c619 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:50.232 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.490 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4133563 00:13:50.490 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:50.490 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.491 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4133563 /var/tmp/host.sock 00:13:50.491 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 4133563 ']' 00:13:50.491 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:50.491 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.491 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:50.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
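
Note: the stretch of trace above is the heart of the ns_masking test. A namespace exported with --no-auto-visible stays hidden from every host until its NQN is explicitly allow-listed, and visibility flips take effect on the live connection without reconnecting. A condensed sketch of the flow, reusing the same RPCs and nvme-cli probes that appear in the log (the NQNs, the 192.168.100.8:4420 RDMA listener, and the all-zero-NGUID convention are the test's own values; the full rpc.py path is shortened here to rpc.py):

  # export a namespace that no host can see by default
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

  # allow-list, then de-list, one host NQN for namespace 1
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

  # initiator side: a masked namespace drops out of list-ns, and id-ns
  # reports an all-zero NGUID, which is exactly what ns_is_visible greps for
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
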
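Note: the NOT wrapper that brackets several ns_is_visible calls above is autotest_common.sh's expected-failure guard: it runs the wrapped command and inverts its exit status, so a hidden namespace registers as a pass. A simplified reading of what the xtrace shows (the real helper also evaluates (( es > 128 )) and an overridable allow-list; this sketch keeps only the inversion):

  NOT() {
      local es=0
      "$@" || es=$?
      # a status above 128 means the command died on a signal; the sketch
      # treats that as a genuine failure rather than an expected one
      (( es > 128 )) && return 1
      (( es != 0 ))   # succeed only if the wrapped command failed
  }
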
00:13:50.491 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.491 12:52:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.491 [2024-11-27 12:52:16.866730] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:13:50.491 [2024-11-27 12:52:16.866782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4133563 ] 00:13:50.749 [2024-11-27 12:52:16.953661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.749 [2024-11-27 12:52:16.993204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.317 12:52:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.317 12:52:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:51.317 12:52:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.576 12:52:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.835 12:52:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c2b3aa45-6c4f-4044-963e-79727ebb39dc 00:13:51.835 12:52:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:51.835 12:52:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C2B3AA456C4F4044963E79727EBB39DC -i 00:13:52.093 12:52:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 26dbb045-cdbd-4f9e-9ec0-d68fbafe5344 00:13:52.093 12:52:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:52.093 12:52:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 26DBB045CDBD4F9E9EC0D68FBAFE5344 -i 00:13:52.093 12:52:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:52.352 12:52:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:52.611 12:52:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:52.611 12:52:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:13:52.870 nvme0n1 00:13:52.870 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:52.870 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:53.129 nvme1n2 00:13:53.129 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:53.129 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:53.129 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:53.129 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:53.129 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:53.129 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:53.129 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:53.129 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:53.129 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:53.387 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c2b3aa45-6c4f-4044-963e-79727ebb39dc == \c\2\b\3\a\a\4\5\-\6\c\4\f\-\4\0\4\4\-\9\6\3\e\-\7\9\7\2\7\e\b\b\3\9\d\c ]] 00:13:53.387 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:53.387 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:53.387 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:53.647 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 26dbb045-cdbd-4f9e-9ec0-d68fbafe5344 == \2\6\d\b\b\0\4\5\-\c\d\b\d\-\4\f\9\e\-\9\e\c\0\-\d\6\8\f\b\a\f\e\5\3\4\4 ]] 00:13:53.647 12:52:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.907 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:53.907 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c2b3aa45-6c4f-4044-963e-79727ebb39dc 00:13:53.907 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:53.907 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C2B3AA456C4F4044963E79727EBB39DC 00:13:53.907 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:53.907 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C2B3AA456C4F4044963E79727EBB39DC 00:13:53.907 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:54.165 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.165 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:54.165 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.165 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:54.166 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.166 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:54.166 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:54.166 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C2B3AA456C4F4044963E79727EBB39DC 00:13:54.166 [2024-11-27 12:52:20.464791] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:54.166 [2024-11-27 12:52:20.464832] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:54.166 [2024-11-27 12:52:20.464843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:54.166 request: 00:13:54.166 { 00:13:54.166 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.166 "namespace": { 00:13:54.166 "bdev_name": "invalid", 00:13:54.166 "nsid": 1, 00:13:54.166 "nguid": "C2B3AA456C4F4044963E79727EBB39DC", 00:13:54.166 "no_auto_visible": false, 00:13:54.166 "hide_metadata": false 00:13:54.166 }, 00:13:54.166 "method": "nvmf_subsystem_add_ns", 00:13:54.166 "req_id": 1 00:13:54.166 } 00:13:54.166 Got JSON-RPC error response 00:13:54.166 response: 00:13:54.166 { 00:13:54.166 "code": -32602, 00:13:54.166 "message": "Invalid parameters" 00:13:54.166 } 00:13:54.166 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:54.166 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:54.166 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:54.166 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:54.166 
12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c2b3aa45-6c4f-4044-963e-79727ebb39dc 00:13:54.166 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:54.166 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C2B3AA456C4F4044963E79727EBB39DC -i 00:13:54.424 12:52:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:56.329 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:56.329 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:56.329 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:56.588 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:56.588 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 4133563 00:13:56.588 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 4133563 ']' 00:13:56.588 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 4133563 00:13:56.588 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:56.588 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.588 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4133563 00:13:56.588 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:56.588 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:56.588 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4133563' 00:13:56.588 killing process with pid 4133563 00:13:56.588 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 4133563 00:13:56.588 12:52:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 4133563 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:57.156 
12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:57.156 rmmod nvme_rdma 00:13:57.156 rmmod nvme_fabrics 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 4131265 ']' 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 4131265 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 4131265 ']' 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 4131265 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.156 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4131265 00:13:57.415 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.415 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.415 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4131265' 00:13:57.415 killing process with pid 4131265 00:13:57.415 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 4131265 00:13:57.415 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 4131265 00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:57.673 00:13:57.673 real 0m28.535s 00:13:57.673 user 0m34.115s 00:13:57.673 sys 0m9.133s 00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:57.673 ************************************ 00:13:57.673 END TEST nvmf_ns_masking 00:13:57.673 ************************************ 00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.673 ************************************ 00:13:57.673 START TEST nvmf_nvme_cli 00:13:57.673 ************************************ 
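
Note: stepping back to the ns_masking trace that just ended, its second phase re-created both namespaces with fixed NGUIDs and confirmed them from the host side over /var/tmp/host.sock. uuid2nguid is simply the UUID with its dashes stripped (the traced tr -d -) and upper-cased. A sketch with the test's own UUID (rpc.py path shortened; the trailing -i flag seen in the log is omitted here):

  uuid=c2b3aa45-6c4f-4044-963e-79727ebb39dc
  nguid=$(tr -d - <<< "$uuid"); nguid=${nguid^^}   # C2B3AA456C4F4044963E79727EBB39DC

  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid"

  # host side: attach, then check the bdev reports the original dashed UUID
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'

The companion negative test is also visible above: adding a namespace backed by a nonexistent bdev named invalid is expected to fail, and the JSON-RPC response (code -32602, Invalid parameters) confirms the target rejected it.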
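Note: the END TEST banner was reached through the standard teardown, condensed here from the trace (same NQN; the killprocess calls for the host-side spdk_tgt and the main nvmf target are elided):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-rdma       # emits the rmmod nvme_rdma / nvme_fabrics lines above
  modprobe -v -r nvme-fabrics
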
00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:57.673 * Looking for test storage... 00:13:57.673 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:57.673 12:52:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.933 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:57.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.934 --rc genhtml_branch_coverage=1 00:13:57.934 --rc genhtml_function_coverage=1 00:13:57.934 --rc genhtml_legend=1 00:13:57.934 --rc geninfo_all_blocks=1 00:13:57.934 --rc geninfo_unexecuted_blocks=1 00:13:57.934 00:13:57.934 ' 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:57.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.934 --rc genhtml_branch_coverage=1 00:13:57.934 --rc genhtml_function_coverage=1 00:13:57.934 --rc genhtml_legend=1 00:13:57.934 --rc geninfo_all_blocks=1 00:13:57.934 --rc geninfo_unexecuted_blocks=1 00:13:57.934 00:13:57.934 ' 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:57.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.934 --rc genhtml_branch_coverage=1 00:13:57.934 --rc genhtml_function_coverage=1 00:13:57.934 --rc genhtml_legend=1 00:13:57.934 --rc geninfo_all_blocks=1 00:13:57.934 --rc geninfo_unexecuted_blocks=1 00:13:57.934 00:13:57.934 ' 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:57.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.934 --rc genhtml_branch_coverage=1 00:13:57.934 --rc genhtml_function_coverage=1 00:13:57.934 --rc genhtml_legend=1 00:13:57.934 --rc geninfo_all_blocks=1 00:13:57.934 --rc geninfo_unexecuted_blocks=1 00:13:57.934 00:13:57.934 ' 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:57.934 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:57.934 12:52:24 
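The "[: : integer expression expected" complaint captured above is the shell itself objecting: line 33 of nvmf/common.sh runs '[' '' -eq 1 ']', a numeric test against an empty expansion. A minimal sketch of the failure and the usual guard; SOME_FLAG is a hypothetical stand-in, not the variable common.sh actually expands:

    unset SOME_FLAG                     # hypothetical empty/unset flag
    [ "$SOME_FLAG" -eq 1 ]              # -> "[: : integer expression expected" (exit 2)
    [ "${SOME_FLAG:-0}" -eq 1 ]         # defaulted expansion: evaluates to plain false
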
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:57.934 12:52:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:06.054 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:06.055 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:06.055 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:06.055 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:06.055 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:06.055 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:06.055 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:06.055 altname enp217s0f0np0 00:14:06.055 altname ens818f0np0 00:14:06.055 inet 192.168.100.8/24 scope global mlx_0_0 00:14:06.055 valid_lft forever preferred_lft forever 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:06.055 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:06.055 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:06.055 altname enp217s0f1np1 00:14:06.055 altname ens818f1np1 00:14:06.055 inet 192.168.100.9/24 scope global mlx_0_1 00:14:06.055 valid_lft forever preferred_lft forever 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:06.055 12:52:32 
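The address probe above is a three-stage pipeline; a hedged re-creation of the helper as the trace shows it (nvmf/common.sh's get_ip_address), runnable on its own:

    get_ip_address() {
        local interface=$1
        # "ip -o" emits one record per line; field 4 is "ADDR/PREFIX",
        # so awk isolates the field and cut strips the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig
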
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:06.055 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:06.056 192.168.100.9' 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:06.056 192.168.100.9' 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:06.056 192.168.100.9' 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:14:06.056 12:52:32 
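RDMA_IP_LIST ends up holding one address per line, and the trace peels off the first and second target IPs with head/tail; a minimal sketch of that split:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
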
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:06.056 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:06.315 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:06.315 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:06.315 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:06.315 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.315 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=4138864 00:14:06.315 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:06.315 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 4138864 00:14:06.315 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 4138864 ']' 00:14:06.315 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.315 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.315 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.315 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.316 12:52:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.316 [2024-11-27 12:52:32.517407] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:14:06.316 [2024-11-27 12:52:32.517461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.316 [2024-11-27 12:52:32.606595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.316 [2024-11-27 12:52:32.648133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.316 [2024-11-27 12:52:32.648173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
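nvmfappstart above backgrounds the target and then blocks in waitforlisten until the RPC socket answers; a simplified stand-in for that start-and-wait pattern (the real helper adds timeouts and more liveness checks), run from the spdk checkout:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the app responds to a trivial RPC.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
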
00:14:06.316 [2024-11-27 12:52:32.648182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.316 [2024-11-27 12:52:32.648191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.316 [2024-11-27 12:52:32.648199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.316 [2024-11-27 12:52:32.649906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.316 [2024-11-27 12:52:32.650000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.316 [2024-11-27 12:52:32.650065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.316 [2024-11-27 12:52:32.650067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.253 [2024-11-27 12:52:33.420200] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcfedf0/0xd032e0) succeed. 00:14:07.253 [2024-11-27 12:52:33.429274] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd00480/0xd44980) succeed. 
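The first RPC after startup creates the RDMA transport, and the two "Create IB device ... succeed" notices are the target claiming mlx5_0 and mlx5_1 in response. Spelled out via rpc.py (rpc_cmd in the trace is a thin wrapper around it):

    # -u is the I/O unit size in bytes; --num-shared-buffers sizes the pool
    # of receive buffers shared across RDMA queue pairs.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
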
00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.253 Malloc0 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.253 Malloc1 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.253 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.513 [2024-11-27 12:52:33.639621] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:07.513 12:52:33 
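Consolidated, the provisioning sequence the trace drives over the RPC socket is: two 64 MiB malloc bdevs, one subsystem, two namespaces, one RDMA listener. The same steps as explicit rpc.py calls:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB, 512 B blocks
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291   # -a: allow any host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420

After this the host side can run nvme discover against 192.168.100.8:4420 and see both the discovery subsystem and cnode1, which is exactly what the log records next.
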
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:14:07.513 00:14:07.513 Discovery Log Number of Records 2, Generation counter 2 00:14:07.513 =====Discovery Log Entry 0====== 00:14:07.513 trtype: rdma 00:14:07.513 adrfam: ipv4 00:14:07.513 subtype: current discovery subsystem 00:14:07.513 treq: not required 00:14:07.513 portid: 0 00:14:07.513 trsvcid: 4420 00:14:07.513 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:07.513 traddr: 192.168.100.8 00:14:07.513 eflags: explicit discovery connections, duplicate discovery information 00:14:07.513 rdma_prtype: not specified 00:14:07.513 rdma_qptype: connected 00:14:07.513 rdma_cms: rdma-cm 00:14:07.513 rdma_pkey: 0x0000 00:14:07.513 =====Discovery Log Entry 1====== 00:14:07.513 trtype: rdma 00:14:07.513 adrfam: ipv4 00:14:07.513 subtype: nvme subsystem 00:14:07.513 treq: not required 00:14:07.513 portid: 0 00:14:07.513 trsvcid: 4420 00:14:07.513 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:07.513 traddr: 192.168.100.8 00:14:07.513 eflags: none 00:14:07.513 rdma_prtype: not specified 00:14:07.513 rdma_qptype: connected 00:14:07.513 rdma_cms: rdma-cm 00:14:07.513 rdma_pkey: 0x0000 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:07.513 12:52:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:08.448 12:52:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:08.448 12:52:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:08.448 12:52:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.448 12:52:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:08.448 12:52:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:08.448 12:52:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:10.982 /dev/nvme0n2 ]] 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:10.982 12:52:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.550 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:11.551 
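Host-side teardown in the trace is disconnect-then-poll: drop the fabric connection, then wait for the kernel to retire the block devices that carried the test serial. A hedged sketch of that pattern:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # Poll lsblk until no device reports the SPDKISFASTANDAWESOME serial;
    # the real waitforserial_disconnect bounds its loop the same way.
    for (( i = 0; i <= 15; i++ )); do
        lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || break
        sleep 1
    done
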
12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:11.551 rmmod nvme_rdma 00:14:11.551 rmmod nvme_fabrics 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 4138864 ']' 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 4138864 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 4138864 ']' 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 4138864 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.551 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4138864 00:14:11.810 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.810 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.810 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4138864' 00:14:11.810 killing process with pid 4138864 00:14:11.810 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 4138864 00:14:11.810 12:52:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 4138864 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:12.070 00:14:12.070 real 0m14.340s 00:14:12.070 user 0m24.645s 00:14:12.070 sys 0m7.047s 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.070 ************************************ 00:14:12.070 END TEST nvmf_nvme_cli 00:14:12.070 ************************************ 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:12.070 ************************************ 00:14:12.070 START TEST nvmf_auth_target 00:14:12.070 ************************************ 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:12.070 * Looking for test storage... 00:14:12.070 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:12.070 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:12.330 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:12.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.331 --rc genhtml_branch_coverage=1 00:14:12.331 --rc genhtml_function_coverage=1 00:14:12.331 --rc genhtml_legend=1 00:14:12.331 --rc geninfo_all_blocks=1 00:14:12.331 --rc geninfo_unexecuted_blocks=1 00:14:12.331 00:14:12.331 ' 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:12.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.331 --rc genhtml_branch_coverage=1 00:14:12.331 --rc genhtml_function_coverage=1 00:14:12.331 --rc genhtml_legend=1 00:14:12.331 --rc geninfo_all_blocks=1 00:14:12.331 --rc geninfo_unexecuted_blocks=1 00:14:12.331 00:14:12.331 ' 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:12.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.331 --rc genhtml_branch_coverage=1 00:14:12.331 --rc genhtml_function_coverage=1 00:14:12.331 --rc genhtml_legend=1 00:14:12.331 --rc geninfo_all_blocks=1 00:14:12.331 --rc geninfo_unexecuted_blocks=1 00:14:12.331 00:14:12.331 ' 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:12.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.331 --rc genhtml_branch_coverage=1 00:14:12.331 --rc genhtml_function_coverage=1 00:14:12.331 --rc genhtml_legend=1 00:14:12.331 --rc geninfo_all_blocks=1 00:14:12.331 --rc geninfo_unexecuted_blocks=1 00:14:12.331 00:14:12.331 ' 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.331 12:52:38 
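The auth test re-enters scripts/common.sh's dotted-version comparison (lt 1.15 2) to decide whether the installed lcov needs the --rc compatibility options. A hedged re-creation of the comparison's shape, ignoring the non-numeric handling the real helper layers on top:

    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        # Compare component by component, padding the shorter version with 0s.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace: 1 < 2
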
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.331 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.332 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:12.332 12:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:20.459 12:52:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:20.459 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:20.459 12:52:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:20.459 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.459 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:20.459 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:20.460 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.460 12:52:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:20.460 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:20.460 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:20.460 altname enp217s0f0np0 00:14:20.460 altname ens818f0np0 00:14:20.460 inet 192.168.100.8/24 scope global mlx_0_0 00:14:20.460 valid_lft forever preferred_lft forever 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:20.460 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:20.460 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:20.460 altname enp217s0f1np1 00:14:20.460 altname ens818f1np1 00:14:20.460 inet 192.168.100.9/24 scope global mlx_0_1 00:14:20.460 valid_lft forever preferred_lft forever 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
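For reference, the allocate_nic_ips pass above resolved mlx_0_0 to 192.168.100.8 and mlx_0_1 to 192.168.100.9, and the get_ip_address helper it keeps calling reduces to exactly the pipeline visible in the -- # trace lines. A minimal standalone sketch, using only commands that appear in the trace:

# sketch of nvmf/common.sh get_ip_address, as traced above
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per IPv4 address; field 4 is the CIDR
    # (e.g. 192.168.100.8/24), so cut strips the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this test bed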
00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:20.460 12:52:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:20.460 192.168.100.9' 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:20.460 192.168.100.9' 00:14:20.460 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:20.461 192.168.100.9' 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=4143878 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 4143878 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4143878 ']' 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
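The waitforlisten call traced here (common/autotest_common.sh) blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock; the trace shows it setting rpc_addr and max_retries=100 before polling. Conceptually it is a loop like the following sketch -- the probe RPC and the sleep interval are assumptions for illustration, not the in-tree code:

# hedged sketch of the wait loop behind "waitforlisten $nvmfpid"
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2> /dev/null || return 1   # app died while starting
        # rpc_get_methods is a cheap RPC; success means the socket is serving
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1   # never came up within the retry budget
}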
00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.461 12:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.029 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.029 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:21.029 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:21.029 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:21.029 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.029 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.030 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=4143915 00:14:21.030 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:21.030 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:21.030 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:21.030 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:21.030 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:21.030 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:21.030 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:21.030 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:21.030 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:21.030 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6469b1d0ae2049a0c8339bf1614d3a5ca8c1a5d8057c2db3 00:14:21.030 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nu8 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6469b1d0ae2049a0c8339bf1614d3a5ca8c1a5d8057c2db3 0 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6469b1d0ae2049a0c8339bf1614d3a5ca8c1a5d8057c2db3 0 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6469b1d0ae2049a0c8339bf1614d3a5ca8c1a5d8057c2db3 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nu8 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nu8 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.nu8 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=81d6635311b52dba1c06afd2c40c701aaf57c82a74b78711eab3780d0d5a2c43 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.94e 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 81d6635311b52dba1c06afd2c40c701aaf57c82a74b78711eab3780d0d5a2c43 3 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 81d6635311b52dba1c06afd2c40c701aaf57c82a74b78711eab3780d0d5a2c43 3 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=81d6635311b52dba1c06afd2c40c701aaf57c82a74b78711eab3780d0d5a2c43 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.94e 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.94e 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.94e 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:21.289 12:52:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f2352f31254a6d53f27f1fea29e0ae96 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.63F 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f2352f31254a6d53f27f1fea29e0ae96 1 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f2352f31254a6d53f27f1fea29e0ae96 1 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f2352f31254a6d53f27f1fea29e0ae96 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.63F 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.63F 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.63F 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9f27e3174bd541671370c2e2c16a11a45f7c7c7acdfc4930 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.8V9 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9f27e3174bd541671370c2e2c16a11a45f7c7c7acdfc4930 2 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9f27e3174bd541671370c2e2c16a11a45f7c7c7acdfc4930 2 00:14:21.289 12:52:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9f27e3174bd541671370c2e2c16a11a45f7c7c7acdfc4930 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.8V9 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.8V9 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.8V9 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:21.289 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=adf28441ba13007f2406655e854035f5c6cc62c2911a5505 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.utJ 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key adf28441ba13007f2406655e854035f5c6cc62c2911a5505 2 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 adf28441ba13007f2406655e854035f5c6cc62c2911a5505 2 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=adf28441ba13007f2406655e854035f5c6cc62c2911a5505 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.utJ 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.utJ 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.utJ 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7d503502fbf623c0a89cea17bd6a68dd 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rWD 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7d503502fbf623c0a89cea17bd6a68dd 1 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7d503502fbf623c0a89cea17bd6a68dd 1 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7d503502fbf623c0a89cea17bd6a68dd 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rWD 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rWD 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.rWD 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6704f9e291e327965c0132aa96905fd8fb885c77b9a94ff6440ebdb2cf5c1819 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:21.549 12:52:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kuc 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6704f9e291e327965c0132aa96905fd8fb885c77b9a94ff6440ebdb2cf5c1819 3 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6704f9e291e327965c0132aa96905fd8fb885c77b9a94ff6440ebdb2cf5c1819 3 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6704f9e291e327965c0132aa96905fd8fb885c77b9a94ff6440ebdb2cf5c1819 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kuc 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kuc 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.kuc 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 4143878 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4143878 ']' 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.549 12:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.808 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.808 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:21.808 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 4143915 /var/tmp/host.sock 00:14:21.808 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4143915 ']' 00:14:21.808 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:21.808 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.808 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
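All eight keys generated above funnel through the same format_key step (the bare "python -" at nvmf/common.sh@733): the hex string read from /dev/urandom is wrapped into the NVMe in-band-auth secret representation DHHC-1:<hash id>:base64(key || crc32):. A minimal equivalent, fed the key0 material generated earlier (CRC byte order per the in-tree helper):

python3 - 6469b1d0ae2049a0c8339bf1614d3a5ca8c1a5d8057c2db3 0 << 'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # trailing 4-byte checksum
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF

Its output is the DHHC-1:00:NjQ2OWIx...== secret that reappears verbatim in the nvme connect call further down.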
00:14:21.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:21.808 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.808 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nu8 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.nu8 00:14:22.068 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.nu8 00:14:22.326 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.94e ]] 00:14:22.326 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.94e 00:14:22.326 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.326 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.326 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.326 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.94e 00:14:22.326 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.94e 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.63F 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.585 12:52:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.63F 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.63F 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.8V9 ]] 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8V9 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8V9 00:14:22.585 12:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8V9 00:14:22.843 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:22.843 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.utJ 00:14:22.843 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.843 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.843 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.843 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.utJ 00:14:22.843 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.utJ 00:14:23.101 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.rWD ]] 00:14:23.101 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rWD 00:14:23.101 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.101 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.101 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.101 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rWD 00:14:23.101 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rWD 00:14:23.359 12:52:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:23.359 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kuc 00:14:23.359 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.359 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.359 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.359 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.kuc 00:14:23.359 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.kuc 00:14:23.359 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:23.359 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:23.359 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.359 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.359 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:23.359 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.618 12:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.876 00:14:23.876 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.876 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.876 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.133 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.133 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.133 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.134 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.134 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.134 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.134 { 00:14:24.134 "cntlid": 1, 00:14:24.134 "qid": 0, 00:14:24.134 "state": "enabled", 00:14:24.134 "thread": "nvmf_tgt_poll_group_000", 00:14:24.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:24.134 "listen_address": { 00:14:24.134 "trtype": "RDMA", 00:14:24.134 "adrfam": "IPv4", 00:14:24.134 "traddr": "192.168.100.8", 00:14:24.134 "trsvcid": "4420" 00:14:24.134 }, 00:14:24.134 "peer_address": { 00:14:24.134 "trtype": "RDMA", 00:14:24.134 "adrfam": "IPv4", 00:14:24.134 "traddr": "192.168.100.8", 00:14:24.134 "trsvcid": "54862" 00:14:24.134 }, 00:14:24.134 "auth": { 00:14:24.134 "state": "completed", 00:14:24.134 "digest": "sha256", 00:14:24.134 "dhgroup": "null" 00:14:24.134 } 00:14:24.134 } 00:14:24.134 ]' 00:14:24.134 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.134 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.134 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.134 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:24.134 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.391 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.391 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.391 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:24.391 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:14:24.391 12:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:14:25.324 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.324 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.325 12:52:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.325 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.583 00:14:25.583 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.583 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.583 12:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.841 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.841 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.841 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.841 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.841 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.841 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.841 { 00:14:25.841 "cntlid": 3, 00:14:25.841 "qid": 0, 00:14:25.841 "state": "enabled", 00:14:25.841 "thread": "nvmf_tgt_poll_group_000", 00:14:25.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:25.841 "listen_address": { 00:14:25.841 "trtype": "RDMA", 00:14:25.841 "adrfam": "IPv4", 00:14:25.841 "traddr": "192.168.100.8", 00:14:25.841 "trsvcid": "4420" 00:14:25.841 }, 00:14:25.841 "peer_address": { 00:14:25.841 "trtype": "RDMA", 00:14:25.841 "adrfam": "IPv4", 00:14:25.841 "traddr": "192.168.100.8", 00:14:25.841 "trsvcid": "46342" 00:14:25.841 }, 00:14:25.841 "auth": { 00:14:25.841 "state": "completed", 00:14:25.841 "digest": "sha256", 00:14:25.841 "dhgroup": "null" 00:14:25.841 } 00:14:25.841 } 00:14:25.841 ]' 00:14:25.841 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.841 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.841 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.841 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:25.841 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.100 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.100 12:52:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.100 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.100 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:14:26.100 12:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.035 12:52:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.035 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.293 00:14:27.293 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.293 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.293 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.552 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.552 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.552 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.552 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.552 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.553 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.553 { 00:14:27.553 "cntlid": 5, 00:14:27.553 "qid": 0, 00:14:27.553 "state": "enabled", 00:14:27.553 "thread": "nvmf_tgt_poll_group_000", 00:14:27.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:27.553 "listen_address": { 00:14:27.553 "trtype": "RDMA", 00:14:27.553 "adrfam": "IPv4", 00:14:27.553 "traddr": "192.168.100.8", 00:14:27.553 "trsvcid": "4420" 00:14:27.553 }, 00:14:27.553 "peer_address": { 00:14:27.553 "trtype": "RDMA", 00:14:27.553 "adrfam": "IPv4", 00:14:27.553 "traddr": "192.168.100.8", 00:14:27.553 "trsvcid": "54119" 00:14:27.553 }, 00:14:27.553 "auth": { 00:14:27.553 "state": "completed", 00:14:27.553 "digest": "sha256", 00:14:27.553 "dhgroup": "null" 00:14:27.553 } 00:14:27.553 } 00:14:27.553 ]' 00:14:27.553 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.553 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.553 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.553 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:27.553 12:52:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.812 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.812 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.812 12:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.812 12:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:14:27.812 12:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:14:28.748 12:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.748 12:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:28.748 12:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.748 12:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.748 12:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.748 12:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.748 12:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:28.748 12:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:28.748 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.007 00:14:29.007 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.007 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.007 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.266 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.266 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.266 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.266 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.266 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.266 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.266 { 00:14:29.266 "cntlid": 7, 00:14:29.266 "qid": 0, 00:14:29.266 "state": "enabled", 00:14:29.266 "thread": "nvmf_tgt_poll_group_000", 00:14:29.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:29.266 "listen_address": { 00:14:29.266 "trtype": "RDMA", 00:14:29.266 "adrfam": "IPv4", 00:14:29.266 "traddr": "192.168.100.8", 00:14:29.266 "trsvcid": "4420" 00:14:29.266 }, 00:14:29.266 "peer_address": { 00:14:29.266 "trtype": "RDMA", 00:14:29.266 "adrfam": "IPv4", 00:14:29.266 "traddr": "192.168.100.8", 00:14:29.266 "trsvcid": "41160" 00:14:29.266 }, 00:14:29.266 "auth": { 00:14:29.266 "state": "completed", 00:14:29.266 "digest": "sha256", 00:14:29.266 "dhgroup": "null" 00:14:29.266 } 00:14:29.266 } 00:14:29.266 ]' 00:14:29.266 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.266 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.266 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
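The jq probes in the trace above are how auth.sh decides each iteration succeeded: it asks the target for the subsystem's queue pairs and checks that the reported auth block matches what the host was configured to negotiate. A condensed sketch of that verification step, assuming the same RPC sockets and subsystem NQN as the trace (hostrpc talks to /var/tmp/host.sock; rpc_cmd reaches the target on its default socket, so no -s there):

    # host side: controller attached?
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    # target side: did the qpair complete DH-CHAP with the expected parameters?
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
    jq -r '.[0].auth.state'   qpairs.json   # expect: completed
    jq -r '.[0].auth.digest'  qpairs.json   # expect: sha256 (set via bdev_nvme_set_options)
    jq -r '.[0].auth.dhgroup' qpairs.json   # expect: null / ffdhe2048 / ffdhe3072, per iteration

Note that this key3 iteration authorizes the host with --dhchap-key key3 only: ckeys[3] was empty earlier in the trace ([[ -n '' ]]), so no controller key exists for that slot and authentication is unidirectional.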
00:14:29.525 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:29.525 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.525 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.525 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.525 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.525 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:14:29.525 12:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.461 12:52:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.461 12:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.720 00:14:30.720 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.720 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.720 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.979 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.979 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.979 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.979 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.979 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.979 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.979 { 00:14:30.979 "cntlid": 9, 00:14:30.979 "qid": 0, 00:14:30.979 "state": "enabled", 00:14:30.979 "thread": "nvmf_tgt_poll_group_000", 00:14:30.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:30.979 "listen_address": { 00:14:30.979 "trtype": "RDMA", 00:14:30.979 "adrfam": "IPv4", 00:14:30.979 "traddr": "192.168.100.8", 00:14:30.979 "trsvcid": "4420" 00:14:30.979 }, 00:14:30.979 "peer_address": { 00:14:30.979 "trtype": "RDMA", 00:14:30.979 "adrfam": "IPv4", 00:14:30.979 "traddr": "192.168.100.8", 00:14:30.979 "trsvcid": "56637" 00:14:30.979 }, 00:14:30.979 "auth": { 00:14:30.979 "state": "completed", 00:14:30.979 "digest": "sha256", 00:14:30.979 "dhgroup": "ffdhe2048" 00:14:30.979 } 00:14:30.979 } 00:14:30.979 ]' 00:14:30.979 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
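Each digest/dhgroup/key combination is exercised with the same three-RPC handshake before any verification runs: constrain the host's DH-CHAP negotiation, authorize the host NQN on the target with the matching key pair, then attach a controller that must authenticate. A sketch of one ffdhe2048 iteration, with the host NQN shortened to <hostnqn> (in the trace it is the long nqn.2014-08.org.nvmexpress:uuid:8013ee90-... NQN); the key names were registered earlier via keyring_file_add_key:

    # host side: allow only this digest/dhgroup combination for DH-CHAP
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # target side: authorize the host with key0/ckey0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach over RDMA; the attach should only succeed if DH-CHAP completes with these keys
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0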
00:14:31.239 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.239 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.239 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:31.239 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.239 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.239 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.239 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.497 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:14:31.497 12:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:14:32.064 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.064 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:32.064 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.064 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.064 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.065 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.065 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:32.065 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:32.324 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:32.324 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.324 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.324 12:52:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:32.324 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:32.324 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.324 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.324 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.324 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.324 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.324 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.324 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.324 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.583 00:14:32.583 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.583 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.583 12:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.841 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.842 { 00:14:32.842 "cntlid": 11, 00:14:32.842 "qid": 0, 00:14:32.842 "state": "enabled", 00:14:32.842 "thread": "nvmf_tgt_poll_group_000", 00:14:32.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:32.842 "listen_address": { 00:14:32.842 "trtype": "RDMA", 00:14:32.842 "adrfam": "IPv4", 00:14:32.842 "traddr": "192.168.100.8", 00:14:32.842 "trsvcid": "4420" 00:14:32.842 }, 00:14:32.842 "peer_address": { 00:14:32.842 "trtype": "RDMA", 00:14:32.842 "adrfam": "IPv4", 00:14:32.842 "traddr": 
"192.168.100.8", 00:14:32.842 "trsvcid": "41731" 00:14:32.842 }, 00:14:32.842 "auth": { 00:14:32.842 "state": "completed", 00:14:32.842 "digest": "sha256", 00:14:32.842 "dhgroup": "ffdhe2048" 00:14:32.842 } 00:14:32.842 } 00:14:32.842 ]' 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.842 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.100 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:14:33.100 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:14:33.667 12:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.926 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.185 00:14:34.444 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.444 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.444 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.444 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.444 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.444 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.444 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.444 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.444 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.444 { 00:14:34.444 "cntlid": 13, 00:14:34.444 "qid": 0, 00:14:34.444 "state": "enabled", 00:14:34.444 "thread": "nvmf_tgt_poll_group_000", 00:14:34.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:34.444 "listen_address": { 00:14:34.444 
"trtype": "RDMA", 00:14:34.444 "adrfam": "IPv4", 00:14:34.444 "traddr": "192.168.100.8", 00:14:34.444 "trsvcid": "4420" 00:14:34.444 }, 00:14:34.444 "peer_address": { 00:14:34.444 "trtype": "RDMA", 00:14:34.444 "adrfam": "IPv4", 00:14:34.444 "traddr": "192.168.100.8", 00:14:34.444 "trsvcid": "38707" 00:14:34.444 }, 00:14:34.444 "auth": { 00:14:34.444 "state": "completed", 00:14:34.444 "digest": "sha256", 00:14:34.444 "dhgroup": "ffdhe2048" 00:14:34.444 } 00:14:34.444 } 00:14:34.444 ]' 00:14:34.444 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.703 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.703 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.703 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:34.703 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.703 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.703 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.703 12:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.961 12:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:14:34.961 12:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:14:35.528 12:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.528 12:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:35.528 12:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.528 12:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.528 12:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.528 12:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.529 12:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:35.529 12:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:35.787 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:35.788 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.788 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:35.788 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:35.788 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:35.788 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.788 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:35.788 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.788 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.788 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.788 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:35.788 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.788 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.046 00:14:36.046 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.046 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.046 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.305 { 00:14:36.305 "cntlid": 15, 00:14:36.305 "qid": 0, 00:14:36.305 "state": "enabled", 
00:14:36.305 "thread": "nvmf_tgt_poll_group_000", 00:14:36.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:36.305 "listen_address": { 00:14:36.305 "trtype": "RDMA", 00:14:36.305 "adrfam": "IPv4", 00:14:36.305 "traddr": "192.168.100.8", 00:14:36.305 "trsvcid": "4420" 00:14:36.305 }, 00:14:36.305 "peer_address": { 00:14:36.305 "trtype": "RDMA", 00:14:36.305 "adrfam": "IPv4", 00:14:36.305 "traddr": "192.168.100.8", 00:14:36.305 "trsvcid": "41010" 00:14:36.305 }, 00:14:36.305 "auth": { 00:14:36.305 "state": "completed", 00:14:36.305 "digest": "sha256", 00:14:36.305 "dhgroup": "ffdhe2048" 00:14:36.305 } 00:14:36.305 } 00:14:36.305 ]' 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.305 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.564 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:14:36.564 12:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:14:37.131 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.389 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.390 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.390 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.390 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.390 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.390 12:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.648 00:14:37.907 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.907 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.907 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.907 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.907 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.907 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.907 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.907 12:53:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.907 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.907 { 00:14:37.907 "cntlid": 17, 00:14:37.907 "qid": 0, 00:14:37.907 "state": "enabled", 00:14:37.907 "thread": "nvmf_tgt_poll_group_000", 00:14:37.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:37.907 "listen_address": { 00:14:37.907 "trtype": "RDMA", 00:14:37.907 "adrfam": "IPv4", 00:14:37.907 "traddr": "192.168.100.8", 00:14:37.907 "trsvcid": "4420" 00:14:37.907 }, 00:14:37.907 "peer_address": { 00:14:37.907 "trtype": "RDMA", 00:14:37.907 "adrfam": "IPv4", 00:14:37.907 "traddr": "192.168.100.8", 00:14:37.907 "trsvcid": "43501" 00:14:37.907 }, 00:14:37.907 "auth": { 00:14:37.907 "state": "completed", 00:14:37.907 "digest": "sha256", 00:14:37.907 "dhgroup": "ffdhe3072" 00:14:37.907 } 00:14:37.907 } 00:14:37.907 ]' 00:14:37.907 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.907 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.907 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.166 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:38.166 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.166 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.166 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.166 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.424 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:14:38.424 12:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:14:38.991 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.991 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:38.991 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.991 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.991 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.991 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.991 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:38.991 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.278 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.584 00:14:39.584 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.584 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.584 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.948 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.949 12:53:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.949 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.949 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.949 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.949 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.949 { 00:14:39.949 "cntlid": 19, 00:14:39.949 "qid": 0, 00:14:39.949 "state": "enabled", 00:14:39.949 "thread": "nvmf_tgt_poll_group_000", 00:14:39.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:39.949 "listen_address": { 00:14:39.949 "trtype": "RDMA", 00:14:39.949 "adrfam": "IPv4", 00:14:39.949 "traddr": "192.168.100.8", 00:14:39.949 "trsvcid": "4420" 00:14:39.949 }, 00:14:39.949 "peer_address": { 00:14:39.949 "trtype": "RDMA", 00:14:39.949 "adrfam": "IPv4", 00:14:39.949 "traddr": "192.168.100.8", 00:14:39.949 "trsvcid": "60972" 00:14:39.949 }, 00:14:39.949 "auth": { 00:14:39.949 "state": "completed", 00:14:39.949 "digest": "sha256", 00:14:39.949 "dhgroup": "ffdhe3072" 00:14:39.949 } 00:14:39.949 } 00:14:39.949 ]' 00:14:39.949 12:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.949 12:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.949 12:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.949 12:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:39.949 12:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.949 12:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.949 12:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.949 12:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.208 12:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:14:40.208 12:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:14:40.777 12:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.777 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:40.777 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.777 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.777 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.777 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.777 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:40.777 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.036 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.295 00:14:41.295 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.295 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.295 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.555 { 00:14:41.555 "cntlid": 21, 00:14:41.555 "qid": 0, 00:14:41.555 "state": "enabled", 00:14:41.555 "thread": "nvmf_tgt_poll_group_000", 00:14:41.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:41.555 "listen_address": { 00:14:41.555 "trtype": "RDMA", 00:14:41.555 "adrfam": "IPv4", 00:14:41.555 "traddr": "192.168.100.8", 00:14:41.555 "trsvcid": "4420" 00:14:41.555 }, 00:14:41.555 "peer_address": { 00:14:41.555 "trtype": "RDMA", 00:14:41.555 "adrfam": "IPv4", 00:14:41.555 "traddr": "192.168.100.8", 00:14:41.555 "trsvcid": "58894" 00:14:41.555 }, 00:14:41.555 "auth": { 00:14:41.555 "state": "completed", 00:14:41.555 "digest": "sha256", 00:14:41.555 "dhgroup": "ffdhe3072" 00:14:41.555 } 00:14:41.555 } 00:14:41.555 ]' 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.555 12:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.814 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:14:41.814 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:14:42.381 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.381 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:42.381 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.381 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.640 12:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.899 00:14:42.899 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.899 12:53:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.899 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.157 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.157 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.157 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.158 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.158 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.158 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.158 { 00:14:43.158 "cntlid": 23, 00:14:43.158 "qid": 0, 00:14:43.158 "state": "enabled", 00:14:43.158 "thread": "nvmf_tgt_poll_group_000", 00:14:43.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:43.158 "listen_address": { 00:14:43.158 "trtype": "RDMA", 00:14:43.158 "adrfam": "IPv4", 00:14:43.158 "traddr": "192.168.100.8", 00:14:43.158 "trsvcid": "4420" 00:14:43.158 }, 00:14:43.158 "peer_address": { 00:14:43.158 "trtype": "RDMA", 00:14:43.158 "adrfam": "IPv4", 00:14:43.158 "traddr": "192.168.100.8", 00:14:43.158 "trsvcid": "58660" 00:14:43.158 }, 00:14:43.158 "auth": { 00:14:43.158 "state": "completed", 00:14:43.158 "digest": "sha256", 00:14:43.158 "dhgroup": "ffdhe3072" 00:14:43.158 } 00:14:43.158 } 00:14:43.158 ]' 00:14:43.158 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.158 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.158 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.417 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:43.417 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.417 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.417 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.417 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.417 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:14:43.417 12:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:14:44.353 12:53:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.353 12:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.921 00:14:44.921 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.921 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.921 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.921 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.921 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.921 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.921 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.921 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.921 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.921 { 00:14:44.921 "cntlid": 25, 00:14:44.921 "qid": 0, 00:14:44.921 "state": "enabled", 00:14:44.921 "thread": "nvmf_tgt_poll_group_000", 00:14:44.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:44.921 "listen_address": { 00:14:44.921 "trtype": "RDMA", 00:14:44.921 "adrfam": "IPv4", 00:14:44.921 "traddr": "192.168.100.8", 00:14:44.921 "trsvcid": "4420" 00:14:44.921 }, 00:14:44.921 "peer_address": { 00:14:44.921 "trtype": "RDMA", 00:14:44.921 "adrfam": "IPv4", 00:14:44.921 "traddr": "192.168.100.8", 00:14:44.921 "trsvcid": "44020" 00:14:44.921 }, 00:14:44.921 "auth": { 00:14:44.921 "state": "completed", 00:14:44.921 "digest": "sha256", 00:14:44.921 "dhgroup": "ffdhe4096" 00:14:44.921 } 00:14:44.921 } 00:14:44.921 ]' 00:14:44.921 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.922 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.922 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.180 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:45.180 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.180 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.180 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.181 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.440 12:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:14:45.440 12:53:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:14:46.007 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.008 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:46.008 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.008 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.008 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.008 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.008 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:46.008 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:46.266 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:46.266 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.266 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:46.266 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:46.267 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:46.267 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.267 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.267 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.267 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.267 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.267 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.267 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.267 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.526 00:14:46.526 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.526 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.526 12:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.785 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.785 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.785 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.785 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.786 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.786 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.786 { 00:14:46.786 "cntlid": 27, 00:14:46.786 "qid": 0, 00:14:46.786 "state": "enabled", 00:14:46.786 "thread": "nvmf_tgt_poll_group_000", 00:14:46.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:46.786 "listen_address": { 00:14:46.786 "trtype": "RDMA", 00:14:46.786 "adrfam": "IPv4", 00:14:46.786 "traddr": "192.168.100.8", 00:14:46.786 "trsvcid": "4420" 00:14:46.786 }, 00:14:46.786 "peer_address": { 00:14:46.786 "trtype": "RDMA", 00:14:46.786 "adrfam": "IPv4", 00:14:46.786 "traddr": "192.168.100.8", 00:14:46.786 "trsvcid": "37131" 00:14:46.786 }, 00:14:46.786 "auth": { 00:14:46.786 "state": "completed", 00:14:46.786 "digest": "sha256", 00:14:46.786 "dhgroup": "ffdhe4096" 00:14:46.786 } 00:14:46.786 } 00:14:46.786 ]' 00:14:46.786 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.786 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.786 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.786 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:46.786 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.786 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.786 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.786 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.045 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:14:47.045 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:14:47.612 12:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.871 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:47.871 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.871 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.871 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.871 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.871 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:47.872 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.131 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.390 00:14:48.390 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.390 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.390 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.390 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.390 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.390 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.390 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.649 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.650 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.650 { 00:14:48.650 "cntlid": 29, 00:14:48.650 "qid": 0, 00:14:48.650 "state": "enabled", 00:14:48.650 "thread": "nvmf_tgt_poll_group_000", 00:14:48.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:48.650 "listen_address": { 00:14:48.650 "trtype": "RDMA", 00:14:48.650 "adrfam": "IPv4", 00:14:48.650 "traddr": "192.168.100.8", 00:14:48.650 "trsvcid": "4420" 00:14:48.650 }, 00:14:48.650 "peer_address": { 00:14:48.650 "trtype": "RDMA", 00:14:48.650 "adrfam": "IPv4", 00:14:48.650 "traddr": "192.168.100.8", 00:14:48.650 "trsvcid": "47733" 00:14:48.650 }, 00:14:48.650 "auth": { 00:14:48.650 "state": "completed", 00:14:48.650 "digest": "sha256", 00:14:48.650 "dhgroup": "ffdhe4096" 00:14:48.650 } 00:14:48.650 } 00:14:48.650 ]' 00:14:48.650 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.650 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.650 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.650 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:48.650 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.650 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.650 12:53:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.650 12:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.909 12:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:14:48.909 12:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:14:49.476 12:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.476 12:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:49.476 12:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.476 12:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.476 12:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.476 12:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.476 12:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:49.476 12:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:49.736 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:49.736 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.736 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:49.736 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:49.736 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:49.736 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.736 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:49.736 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.736 12:53:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.736 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.736 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:49.736 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.736 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.994 00:14:49.994 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.994 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.994 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.253 { 00:14:50.253 "cntlid": 31, 00:14:50.253 "qid": 0, 00:14:50.253 "state": "enabled", 00:14:50.253 "thread": "nvmf_tgt_poll_group_000", 00:14:50.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:50.253 "listen_address": { 00:14:50.253 "trtype": "RDMA", 00:14:50.253 "adrfam": "IPv4", 00:14:50.253 "traddr": "192.168.100.8", 00:14:50.253 "trsvcid": "4420" 00:14:50.253 }, 00:14:50.253 "peer_address": { 00:14:50.253 "trtype": "RDMA", 00:14:50.253 "adrfam": "IPv4", 00:14:50.253 "traddr": "192.168.100.8", 00:14:50.253 "trsvcid": "50468" 00:14:50.253 }, 00:14:50.253 "auth": { 00:14:50.253 "state": "completed", 00:14:50.253 "digest": "sha256", 00:14:50.253 "dhgroup": "ffdhe4096" 00:14:50.253 } 00:14:50.253 } 00:14:50.253 ]' 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.253 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.512 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:14:50.512 12:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:14:51.081 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.340 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:51.340 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.340 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.340 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.340 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.340 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.340 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.340 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.600 12:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.860 00:14:51.860 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.860 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.860 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.120 { 00:14:52.120 "cntlid": 33, 00:14:52.120 "qid": 0, 00:14:52.120 "state": "enabled", 00:14:52.120 "thread": "nvmf_tgt_poll_group_000", 00:14:52.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:52.120 "listen_address": { 00:14:52.120 "trtype": "RDMA", 00:14:52.120 "adrfam": "IPv4", 00:14:52.120 "traddr": "192.168.100.8", 00:14:52.120 "trsvcid": "4420" 00:14:52.120 }, 00:14:52.120 "peer_address": { 00:14:52.120 "trtype": "RDMA", 00:14:52.120 "adrfam": "IPv4", 00:14:52.120 "traddr": "192.168.100.8", 00:14:52.120 "trsvcid": "54420" 00:14:52.120 }, 00:14:52.120 "auth": { 00:14:52.120 "state": "completed", 00:14:52.120 "digest": "sha256", 00:14:52.120 "dhgroup": "ffdhe6144" 00:14:52.120 } 00:14:52.120 } 00:14:52.120 ]' 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.120 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.379 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:14:52.379 12:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:14:52.947 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:53.207 12:53:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.207 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.776 00:14:53.776 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.776 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.776 12:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.776 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.776 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.776 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.777 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.777 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.777 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.777 { 00:14:53.777 "cntlid": 35, 00:14:53.777 "qid": 0, 00:14:53.777 "state": "enabled", 00:14:53.777 "thread": "nvmf_tgt_poll_group_000", 00:14:53.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:53.777 "listen_address": { 00:14:53.777 "trtype": "RDMA", 00:14:53.777 "adrfam": "IPv4", 00:14:53.777 "traddr": "192.168.100.8", 00:14:53.777 "trsvcid": "4420" 00:14:53.777 }, 00:14:53.777 "peer_address": { 00:14:53.777 "trtype": "RDMA", 00:14:53.777 "adrfam": "IPv4", 00:14:53.777 "traddr": "192.168.100.8", 00:14:53.777 "trsvcid": "57118" 00:14:53.777 }, 00:14:53.777 "auth": { 00:14:53.777 "state": "completed", 00:14:53.777 "digest": "sha256", 00:14:53.777 "dhgroup": "ffdhe6144" 00:14:53.777 } 00:14:53.777 } 
00:14:53.777 ]' 00:14:53.777 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.036 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.036 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.036 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:54.036 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.036 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.036 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.036 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.295 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:14:54.295 12:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:14:54.862 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.863 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:54.863 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.863 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.863 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.863 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.863 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.863 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.122 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.381 00:14:55.640 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.640 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.640 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.640 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.640 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.640 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.640 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.640 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.640 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.640 { 00:14:55.640 "cntlid": 37, 00:14:55.640 "qid": 0, 00:14:55.640 "state": "enabled", 00:14:55.640 "thread": "nvmf_tgt_poll_group_000", 00:14:55.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:55.640 "listen_address": { 00:14:55.640 "trtype": "RDMA", 00:14:55.640 "adrfam": "IPv4", 00:14:55.640 "traddr": "192.168.100.8", 00:14:55.640 "trsvcid": "4420" 00:14:55.640 }, 00:14:55.640 "peer_address": { 00:14:55.640 "trtype": "RDMA", 00:14:55.640 "adrfam": 
"IPv4", 00:14:55.640 "traddr": "192.168.100.8", 00:14:55.640 "trsvcid": "34261" 00:14:55.640 }, 00:14:55.640 "auth": { 00:14:55.640 "state": "completed", 00:14:55.640 "digest": "sha256", 00:14:55.640 "dhgroup": "ffdhe6144" 00:14:55.640 } 00:14:55.640 } 00:14:55.640 ]' 00:14:55.640 12:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.640 12:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.899 12:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.899 12:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:55.899 12:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.899 12:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.899 12:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.899 12:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.158 12:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:14:56.158 12:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:14:56.726 12:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.726 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:56.726 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.726 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.726 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.726 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.726 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:56.726 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:56.986 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:57.245 00:14:57.245 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.245 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.245 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.503 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.503 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.503 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.503 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.503 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.503 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.503 { 00:14:57.504 "cntlid": 39, 00:14:57.504 "qid": 0, 00:14:57.504 "state": "enabled", 00:14:57.504 "thread": "nvmf_tgt_poll_group_000", 00:14:57.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:57.504 "listen_address": { 00:14:57.504 "trtype": "RDMA", 00:14:57.504 "adrfam": "IPv4", 00:14:57.504 
"traddr": "192.168.100.8", 00:14:57.504 "trsvcid": "4420" 00:14:57.504 }, 00:14:57.504 "peer_address": { 00:14:57.504 "trtype": "RDMA", 00:14:57.504 "adrfam": "IPv4", 00:14:57.504 "traddr": "192.168.100.8", 00:14:57.504 "trsvcid": "54283" 00:14:57.504 }, 00:14:57.504 "auth": { 00:14:57.504 "state": "completed", 00:14:57.504 "digest": "sha256", 00:14:57.504 "dhgroup": "ffdhe6144" 00:14:57.504 } 00:14:57.504 } 00:14:57.504 ]' 00:14:57.504 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.504 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.504 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.504 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:57.504 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.762 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.762 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.762 12:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.762 12:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:14:57.762 12:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:14:58.329 12:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.588 12:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:58.588 12:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.588 12:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.588 12:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.588 12:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:58.588 12:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.588 12:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.588 12:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.847 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.417 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.417 { 00:14:59.417 "cntlid": 41, 00:14:59.417 "qid": 0, 00:14:59.417 "state": "enabled", 
00:14:59.417 "thread": "nvmf_tgt_poll_group_000", 00:14:59.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:59.417 "listen_address": { 00:14:59.417 "trtype": "RDMA", 00:14:59.417 "adrfam": "IPv4", 00:14:59.417 "traddr": "192.168.100.8", 00:14:59.417 "trsvcid": "4420" 00:14:59.417 }, 00:14:59.417 "peer_address": { 00:14:59.417 "trtype": "RDMA", 00:14:59.417 "adrfam": "IPv4", 00:14:59.417 "traddr": "192.168.100.8", 00:14:59.417 "trsvcid": "54503" 00:14:59.417 }, 00:14:59.417 "auth": { 00:14:59.417 "state": "completed", 00:14:59.417 "digest": "sha256", 00:14:59.417 "dhgroup": "ffdhe8192" 00:14:59.417 } 00:14:59.417 } 00:14:59.417 ]' 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:59.417 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.676 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.676 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.676 12:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.676 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:14:59.676 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:00.243 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.502 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:00.502 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.502 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.502 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.502 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.502 12:53:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:00.502 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.763 12:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.022 00:15:01.282 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.282 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.282 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.282 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.282 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.282 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.282 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.282 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.282 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.282 { 00:15:01.282 "cntlid": 43, 00:15:01.282 "qid": 0, 00:15:01.282 "state": "enabled", 00:15:01.282 "thread": "nvmf_tgt_poll_group_000", 00:15:01.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:01.282 "listen_address": { 00:15:01.282 "trtype": "RDMA", 00:15:01.282 "adrfam": "IPv4", 00:15:01.282 "traddr": "192.168.100.8", 00:15:01.282 "trsvcid": "4420" 00:15:01.282 }, 00:15:01.282 "peer_address": { 00:15:01.282 "trtype": "RDMA", 00:15:01.282 "adrfam": "IPv4", 00:15:01.282 "traddr": "192.168.100.8", 00:15:01.282 "trsvcid": "47722" 00:15:01.282 }, 00:15:01.282 "auth": { 00:15:01.282 "state": "completed", 00:15:01.282 "digest": "sha256", 00:15:01.282 "dhgroup": "ffdhe8192" 00:15:01.282 } 00:15:01.282 } 00:15:01.282 ]' 00:15:01.282 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.282 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.282 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.541 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:01.541 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.541 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.541 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.541 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.801 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:01.801 12:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:02.368 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.368 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:02.368 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.368 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
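[The entries above and below all trace the same authentication cycle, repeated once per digest/dhgroup/key combination that target/auth.sh iterates over. Condensed into a sketch for readability — DIGEST, DHGROUP and KEYID are illustrative placeholders, hostrpc/rpc_cmd are the test helpers that the traces above show expanding to the host-side and target-side rpc.py invocations, and the DHHC-1 secrets plus the nvme connect host-identity flags are elided — each cycle amounts to:

    # Allow exactly one digest/dhgroup pair on the host side.
    hostrpc bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
    # Register the host on the subsystem with the key under test
    # (the ctrlr key is omitted when the key has no paired ckey, e.g. key3).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"
    # Attach through the SPDK host stack and verify the controller exists.
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # The target's qpair view must report the negotiated parameters.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$DIGEST" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$DHGROUP" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0
    # Repeat the handshake through the kernel initiator with the raw secrets,
    # then tear everything down for the next combination.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        --dhchap-secret "DHHC-1:..."   # plus -q/--hostid/-l and, where paired, --dhchap-ctrl-secret
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

The qpair checks are the actual pass criterion: auth.state must read "completed" with exactly the digest and dhgroup just configured, first over the SPDK host stack and then again over the kernel initiator.]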
00:15:02.368 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.369 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.369 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.369 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.628 12:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.195 00:15:03.195 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.195 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.195 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.195 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.195 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.195 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.195 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.195 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.455 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.455 { 00:15:03.455 "cntlid": 45, 00:15:03.455 "qid": 0, 00:15:03.455 "state": "enabled", 00:15:03.455 "thread": "nvmf_tgt_poll_group_000", 00:15:03.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:03.455 "listen_address": { 00:15:03.455 "trtype": "RDMA", 00:15:03.455 "adrfam": "IPv4", 00:15:03.455 "traddr": "192.168.100.8", 00:15:03.455 "trsvcid": "4420" 00:15:03.455 }, 00:15:03.455 "peer_address": { 00:15:03.455 "trtype": "RDMA", 00:15:03.455 "adrfam": "IPv4", 00:15:03.455 "traddr": "192.168.100.8", 00:15:03.455 "trsvcid": "46247" 00:15:03.455 }, 00:15:03.455 "auth": { 00:15:03.455 "state": "completed", 00:15:03.455 "digest": "sha256", 00:15:03.455 "dhgroup": "ffdhe8192" 00:15:03.455 } 00:15:03.455 } 00:15:03.455 ]' 00:15:03.456 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.456 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.456 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.456 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:03.456 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.456 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.456 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.456 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.715 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:03.715 12:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:04.284 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.284 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:04.284 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.284 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.284 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.284 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.284 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.284 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:04.543 12:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:05.111 00:15:05.111 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.112 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.112 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.112 
12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.371 { 00:15:05.371 "cntlid": 47, 00:15:05.371 "qid": 0, 00:15:05.371 "state": "enabled", 00:15:05.371 "thread": "nvmf_tgt_poll_group_000", 00:15:05.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:05.371 "listen_address": { 00:15:05.371 "trtype": "RDMA", 00:15:05.371 "adrfam": "IPv4", 00:15:05.371 "traddr": "192.168.100.8", 00:15:05.371 "trsvcid": "4420" 00:15:05.371 }, 00:15:05.371 "peer_address": { 00:15:05.371 "trtype": "RDMA", 00:15:05.371 "adrfam": "IPv4", 00:15:05.371 "traddr": "192.168.100.8", 00:15:05.371 "trsvcid": "37773" 00:15:05.371 }, 00:15:05.371 "auth": { 00:15:05.371 "state": "completed", 00:15:05.371 "digest": "sha256", 00:15:05.371 "dhgroup": "ffdhe8192" 00:15:05.371 } 00:15:05.371 } 00:15:05.371 ]' 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.371 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.630 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:05.630 12:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:06.198 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.198 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:06.198 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.198 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.198 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.198 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:06.198 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.198 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.199 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.199 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.458 12:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.717 00:15:06.717 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:06.717 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.717 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.976 { 00:15:06.976 "cntlid": 49, 00:15:06.976 "qid": 0, 00:15:06.976 "state": "enabled", 00:15:06.976 "thread": "nvmf_tgt_poll_group_000", 00:15:06.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:06.976 "listen_address": { 00:15:06.976 "trtype": "RDMA", 00:15:06.976 "adrfam": "IPv4", 00:15:06.976 "traddr": "192.168.100.8", 00:15:06.976 "trsvcid": "4420" 00:15:06.976 }, 00:15:06.976 "peer_address": { 00:15:06.976 "trtype": "RDMA", 00:15:06.976 "adrfam": "IPv4", 00:15:06.976 "traddr": "192.168.100.8", 00:15:06.976 "trsvcid": "50584" 00:15:06.976 }, 00:15:06.976 "auth": { 00:15:06.976 "state": "completed", 00:15:06.976 "digest": "sha384", 00:15:06.976 "dhgroup": "null" 00:15:06.976 } 00:15:06.976 } 00:15:06.976 ]' 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.976 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.235 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:07.235 12:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:07.803 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.063 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:08.063 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.063 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.063 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.063 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.063 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:08.063 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.322 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.582 00:15:08.582 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.582 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.582 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.582 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.582 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.582 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.582 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.582 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.582 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.582 { 00:15:08.582 "cntlid": 51, 00:15:08.582 "qid": 0, 00:15:08.582 "state": "enabled", 00:15:08.582 "thread": "nvmf_tgt_poll_group_000", 00:15:08.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:08.582 "listen_address": { 00:15:08.582 "trtype": "RDMA", 00:15:08.582 "adrfam": "IPv4", 00:15:08.582 "traddr": "192.168.100.8", 00:15:08.582 "trsvcid": "4420" 00:15:08.582 }, 00:15:08.582 "peer_address": { 00:15:08.582 "trtype": "RDMA", 00:15:08.582 "adrfam": "IPv4", 00:15:08.582 "traddr": "192.168.100.8", 00:15:08.582 "trsvcid": "33152" 00:15:08.582 }, 00:15:08.582 "auth": { 00:15:08.582 "state": "completed", 00:15:08.582 "digest": "sha384", 00:15:08.582 "dhgroup": "null" 00:15:08.582 } 00:15:08.582 } 00:15:08.582 ]' 00:15:08.582 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.842 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.842 12:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.842 12:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:08.842 12:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.842 12:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.842 12:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.842 12:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.101 12:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:09.101 12:53:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:09.669 12:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.669 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:09.670 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.670 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.670 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.670 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.670 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.670 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:15:09.929 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.188 00:15:10.188 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.188 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.188 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.447 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.447 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.447 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.447 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.447 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.447 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.447 { 00:15:10.447 "cntlid": 53, 00:15:10.447 "qid": 0, 00:15:10.447 "state": "enabled", 00:15:10.447 "thread": "nvmf_tgt_poll_group_000", 00:15:10.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:10.447 "listen_address": { 00:15:10.447 "trtype": "RDMA", 00:15:10.447 "adrfam": "IPv4", 00:15:10.447 "traddr": "192.168.100.8", 00:15:10.447 "trsvcid": "4420" 00:15:10.447 }, 00:15:10.447 "peer_address": { 00:15:10.447 "trtype": "RDMA", 00:15:10.447 "adrfam": "IPv4", 00:15:10.447 "traddr": "192.168.100.8", 00:15:10.447 "trsvcid": "44871" 00:15:10.447 }, 00:15:10.447 "auth": { 00:15:10.447 "state": "completed", 00:15:10.447 "digest": "sha384", 00:15:10.447 "dhgroup": "null" 00:15:10.447 } 00:15:10.447 } 00:15:10.447 ]' 00:15:10.447 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.447 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.447 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.448 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:10.448 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.707 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.707 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.707 12:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.707 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:10.707 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:11.276 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.536 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:11.536 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.536 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.536 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.536 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.536 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.536 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.796 12:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.796 00:15:12.055 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.055 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.055 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.055 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.055 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.055 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.055 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.055 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.055 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.055 { 00:15:12.055 "cntlid": 55, 00:15:12.055 "qid": 0, 00:15:12.055 "state": "enabled", 00:15:12.055 "thread": "nvmf_tgt_poll_group_000", 00:15:12.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:12.055 "listen_address": { 00:15:12.055 "trtype": "RDMA", 00:15:12.055 "adrfam": "IPv4", 00:15:12.055 "traddr": "192.168.100.8", 00:15:12.055 "trsvcid": "4420" 00:15:12.055 }, 00:15:12.055 "peer_address": { 00:15:12.055 "trtype": "RDMA", 00:15:12.055 "adrfam": "IPv4", 00:15:12.055 "traddr": "192.168.100.8", 00:15:12.055 "trsvcid": "39137" 00:15:12.055 }, 00:15:12.055 "auth": { 00:15:12.056 "state": "completed", 00:15:12.056 "digest": "sha384", 00:15:12.056 "dhgroup": "null" 00:15:12.056 } 00:15:12.056 } 00:15:12.056 ]' 00:15:12.056 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.315 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.315 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.315 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:12.315 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.315 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.315 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.315 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
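(For reference, a condensed sketch of what each digest/dhgroup/key iteration above performs, with the paths, sockets, NQNs and flags copied from the log entries; the DHCHAP keys themselves — key0..key3 and their ckey counterparts — are assumed to have been registered earlier in the run, outside this excerpt.)

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side (-s /var/tmp/host.sock): restrict the initiator to one
  # digest/dhgroup pair for this iteration.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups null

  # Target side (default RPC socket): allow the host with this key pair.
  # The key3 iterations have no controller key, so they omit
  # --dhchap-ctrlr-key, as the log shows.
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a controller through the host socket, authenticating with the
  # same keys, then verify what was negotiated before tearing down.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q $hostnqn -n $subnqn \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'                # digest/dhgroup/state
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0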
00:15:12.574 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:12.574 12:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:13.142 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.142 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:13.142 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.142 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.142 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.142 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.142 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.142 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:13.142 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.401 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.661 00:15:13.661 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.661 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.661 12:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.919 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.920 { 00:15:13.920 "cntlid": 57, 00:15:13.920 "qid": 0, 00:15:13.920 "state": "enabled", 00:15:13.920 "thread": "nvmf_tgt_poll_group_000", 00:15:13.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:13.920 "listen_address": { 00:15:13.920 "trtype": "RDMA", 00:15:13.920 "adrfam": "IPv4", 00:15:13.920 "traddr": "192.168.100.8", 00:15:13.920 "trsvcid": "4420" 00:15:13.920 }, 00:15:13.920 "peer_address": { 00:15:13.920 "trtype": "RDMA", 00:15:13.920 "adrfam": "IPv4", 00:15:13.920 "traddr": "192.168.100.8", 00:15:13.920 "trsvcid": "47286" 00:15:13.920 }, 00:15:13.920 "auth": { 00:15:13.920 "state": "completed", 00:15:13.920 "digest": "sha384", 00:15:13.920 "dhgroup": "ffdhe2048" 00:15:13.920 } 00:15:13.920 } 00:15:13.920 ]' 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:15:13.920 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.179 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:14.179 12:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:14.746 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.004 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:15.004 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.004 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.004 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.004 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.004 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.004 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.262 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:15.262 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.262 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:15.262 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:15.262 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:15.262 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.262 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.262 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.262 
12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.262 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.262 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.262 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.262 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.521 00:15:15.521 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.521 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.521 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.521 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.521 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.521 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.521 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.521 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.521 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.521 { 00:15:15.521 "cntlid": 59, 00:15:15.521 "qid": 0, 00:15:15.521 "state": "enabled", 00:15:15.521 "thread": "nvmf_tgt_poll_group_000", 00:15:15.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:15.521 "listen_address": { 00:15:15.521 "trtype": "RDMA", 00:15:15.521 "adrfam": "IPv4", 00:15:15.521 "traddr": "192.168.100.8", 00:15:15.521 "trsvcid": "4420" 00:15:15.521 }, 00:15:15.521 "peer_address": { 00:15:15.521 "trtype": "RDMA", 00:15:15.521 "adrfam": "IPv4", 00:15:15.521 "traddr": "192.168.100.8", 00:15:15.521 "trsvcid": "44585" 00:15:15.521 }, 00:15:15.521 "auth": { 00:15:15.521 "state": "completed", 00:15:15.521 "digest": "sha384", 00:15:15.521 "dhgroup": "ffdhe2048" 00:15:15.521 } 00:15:15.521 } 00:15:15.521 ]' 00:15:15.521 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.779 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.779 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.779 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
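(After the qpair check, each iteration also authenticates through the kernel initiator via nvme-cli and then removes the host entry; a sketch with the DHHC-1 secrets elided — the literal values appear in the log entries above.)

  # Kernel-initiator leg of each iteration; the DHHC-1 strings below are
  # placeholders for the secrets printed in the log, not real values.
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
      --dhchap-secret 'DHHC-1:01:<host secret>' \
      --dhchap-ctrl-secret 'DHHC-1:02:<ctrl secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # Drop the host entry so the next key/dhgroup combination starts from a
  # clean subsystem (target-side RPC, default socket).
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e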
00:15:15.779 12:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.779 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.779 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.779 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.036 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:16.036 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:16.603 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.603 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:16.603 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.603 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.603 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.603 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.603 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:16.603 12:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.861 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.119 00:15:17.119 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.119 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.119 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.377 { 00:15:17.377 "cntlid": 61, 00:15:17.377 "qid": 0, 00:15:17.377 "state": "enabled", 00:15:17.377 "thread": "nvmf_tgt_poll_group_000", 00:15:17.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:17.377 "listen_address": { 00:15:17.377 "trtype": "RDMA", 00:15:17.377 "adrfam": "IPv4", 00:15:17.377 "traddr": "192.168.100.8", 00:15:17.377 "trsvcid": "4420" 00:15:17.377 }, 00:15:17.377 "peer_address": { 00:15:17.377 "trtype": "RDMA", 00:15:17.377 "adrfam": "IPv4", 00:15:17.377 "traddr": "192.168.100.8", 00:15:17.377 "trsvcid": "58770" 00:15:17.377 }, 00:15:17.377 "auth": { 00:15:17.377 "state": "completed", 00:15:17.377 "digest": "sha384", 00:15:17.377 "dhgroup": "ffdhe2048" 00:15:17.377 } 00:15:17.377 } 00:15:17.377 ]' 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.377 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.634 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:17.634 12:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:18.201 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.459 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:18.459 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.459 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.459 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:18.460 12:53:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.460 12:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.718 00:15:18.718 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.718 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.718 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.976 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.976 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.976 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.976 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.976 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.976 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.976 { 00:15:18.976 "cntlid": 63, 00:15:18.976 "qid": 0, 00:15:18.976 "state": "enabled", 00:15:18.976 "thread": "nvmf_tgt_poll_group_000", 00:15:18.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:18.976 "listen_address": { 00:15:18.976 "trtype": "RDMA", 00:15:18.976 "adrfam": "IPv4", 00:15:18.976 "traddr": "192.168.100.8", 00:15:18.976 "trsvcid": "4420" 00:15:18.976 }, 00:15:18.976 "peer_address": { 00:15:18.976 "trtype": "RDMA", 00:15:18.976 "adrfam": "IPv4", 00:15:18.976 "traddr": "192.168.100.8", 00:15:18.976 "trsvcid": "43958" 00:15:18.976 }, 00:15:18.976 "auth": { 00:15:18.976 "state": "completed", 00:15:18.976 "digest": "sha384", 00:15:18.976 "dhgroup": "ffdhe2048" 00:15:18.976 } 00:15:18.976 } 00:15:18.976 ]' 00:15:18.976 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.976 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.976 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.235 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:19.235 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.235 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.235 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.235 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.493 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:19.493 12:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:20.061 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.061 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:20.061 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.061 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.062 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.062 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.062 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.062 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.062 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.320 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:20.320 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.320 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:20.320 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:20.320 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:20.320 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.320 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.320 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.320 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.320 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.321 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.321 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.321 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.580 00:15:20.580 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.580 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.580 12:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.839 { 00:15:20.839 "cntlid": 65, 00:15:20.839 "qid": 0, 00:15:20.839 "state": "enabled", 00:15:20.839 "thread": "nvmf_tgt_poll_group_000", 00:15:20.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:20.839 "listen_address": { 00:15:20.839 "trtype": "RDMA", 00:15:20.839 "adrfam": "IPv4", 00:15:20.839 "traddr": "192.168.100.8", 00:15:20.839 "trsvcid": "4420" 00:15:20.839 }, 00:15:20.839 "peer_address": { 00:15:20.839 "trtype": "RDMA", 00:15:20.839 "adrfam": "IPv4", 00:15:20.839 "traddr": "192.168.100.8", 00:15:20.839 "trsvcid": "36526" 
00:15:20.839 }, 00:15:20.839 "auth": { 00:15:20.839 "state": "completed", 00:15:20.839 "digest": "sha384", 00:15:20.839 "dhgroup": "ffdhe3072" 00:15:20.839 } 00:15:20.839 } 00:15:20.839 ]' 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.839 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.097 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:21.097 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:21.664 12:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.924 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.183 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.183 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.183 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.183 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.183 00:15:22.442 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.442 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.442 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.442 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.442 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.442 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.442 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.442 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.442 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.442 { 00:15:22.442 "cntlid": 67, 00:15:22.442 "qid": 0, 00:15:22.442 "state": "enabled", 00:15:22.442 "thread": "nvmf_tgt_poll_group_000", 00:15:22.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 
00:15:22.442 "listen_address": { 00:15:22.442 "trtype": "RDMA", 00:15:22.442 "adrfam": "IPv4", 00:15:22.442 "traddr": "192.168.100.8", 00:15:22.442 "trsvcid": "4420" 00:15:22.442 }, 00:15:22.442 "peer_address": { 00:15:22.442 "trtype": "RDMA", 00:15:22.442 "adrfam": "IPv4", 00:15:22.442 "traddr": "192.168.100.8", 00:15:22.442 "trsvcid": "54350" 00:15:22.442 }, 00:15:22.442 "auth": { 00:15:22.442 "state": "completed", 00:15:22.442 "digest": "sha384", 00:15:22.442 "dhgroup": "ffdhe3072" 00:15:22.442 } 00:15:22.442 } 00:15:22.442 ]' 00:15:22.442 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.442 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.442 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.700 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.700 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.700 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.700 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.700 12:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.963 12:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:22.963 12:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:23.531 12:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.531 12:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:23.531 12:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.531 12:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.531 12:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.531 12:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.531 12:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:23.531 12:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.791 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.050 00:15:24.050 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.050 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.050 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.309 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.309 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.310 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.310 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.310 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.310 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:15:24.310 { 00:15:24.310 "cntlid": 69, 00:15:24.310 "qid": 0, 00:15:24.310 "state": "enabled", 00:15:24.310 "thread": "nvmf_tgt_poll_group_000", 00:15:24.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:24.310 "listen_address": { 00:15:24.310 "trtype": "RDMA", 00:15:24.310 "adrfam": "IPv4", 00:15:24.310 "traddr": "192.168.100.8", 00:15:24.310 "trsvcid": "4420" 00:15:24.310 }, 00:15:24.310 "peer_address": { 00:15:24.310 "trtype": "RDMA", 00:15:24.310 "adrfam": "IPv4", 00:15:24.310 "traddr": "192.168.100.8", 00:15:24.310 "trsvcid": "53219" 00:15:24.310 }, 00:15:24.310 "auth": { 00:15:24.310 "state": "completed", 00:15:24.310 "digest": "sha384", 00:15:24.310 "dhgroup": "ffdhe3072" 00:15:24.310 } 00:15:24.310 } 00:15:24.310 ]' 00:15:24.310 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.310 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.310 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.310 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.310 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.310 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.310 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.310 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.570 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:24.570 12:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:25.179 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.484 12:53:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.484 12:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.751 00:15:25.751 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.751 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.751 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.009 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.009 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.009 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.009 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.009 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.009 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.009 { 00:15:26.009 "cntlid": 71, 00:15:26.009 "qid": 0, 00:15:26.009 "state": "enabled", 00:15:26.009 "thread": "nvmf_tgt_poll_group_000", 00:15:26.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:26.009 "listen_address": { 00:15:26.009 "trtype": "RDMA", 00:15:26.010 "adrfam": "IPv4", 00:15:26.010 "traddr": "192.168.100.8", 00:15:26.010 "trsvcid": "4420" 00:15:26.010 }, 00:15:26.010 "peer_address": { 00:15:26.010 "trtype": "RDMA", 00:15:26.010 "adrfam": "IPv4", 00:15:26.010 "traddr": "192.168.100.8", 00:15:26.010 "trsvcid": "60082" 00:15:26.010 }, 00:15:26.010 "auth": { 00:15:26.010 "state": "completed", 00:15:26.010 "digest": "sha384", 00:15:26.010 "dhgroup": "ffdhe3072" 00:15:26.010 } 00:15:26.010 } 00:15:26.010 ]' 00:15:26.010 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.010 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.010 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.010 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:26.010 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.010 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.010 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.010 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.268 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:26.268 12:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:26.835 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:27.094 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.353 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.353 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.353 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.353 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.353 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.353 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.353 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.611 00:15:27.611 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.611 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.611 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.611 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.611 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.611 12:53:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.611 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.611 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.611 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.611 { 00:15:27.611 "cntlid": 73, 00:15:27.611 "qid": 0, 00:15:27.611 "state": "enabled", 00:15:27.611 "thread": "nvmf_tgt_poll_group_000", 00:15:27.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:27.611 "listen_address": { 00:15:27.611 "trtype": "RDMA", 00:15:27.611 "adrfam": "IPv4", 00:15:27.611 "traddr": "192.168.100.8", 00:15:27.611 "trsvcid": "4420" 00:15:27.611 }, 00:15:27.611 "peer_address": { 00:15:27.611 "trtype": "RDMA", 00:15:27.611 "adrfam": "IPv4", 00:15:27.611 "traddr": "192.168.100.8", 00:15:27.611 "trsvcid": "46678" 00:15:27.611 }, 00:15:27.611 "auth": { 00:15:27.611 "state": "completed", 00:15:27.611 "digest": "sha384", 00:15:27.611 "dhgroup": "ffdhe4096" 00:15:27.611 } 00:15:27.611 } 00:15:27.611 ]' 00:15:27.611 12:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.869 12:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.869 12:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.869 12:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:27.869 12:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.869 12:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.869 12:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.869 12:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.156 12:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:28.156 12:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:28.723 12:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.724 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:28.724 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.724 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.724 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.724 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.724 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:28.724 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.982 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.241 00:15:29.241 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.241 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.241 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.499 { 00:15:29.499 "cntlid": 75, 00:15:29.499 "qid": 0, 00:15:29.499 "state": "enabled", 00:15:29.499 "thread": "nvmf_tgt_poll_group_000", 00:15:29.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:29.499 "listen_address": { 00:15:29.499 "trtype": "RDMA", 00:15:29.499 "adrfam": "IPv4", 00:15:29.499 "traddr": "192.168.100.8", 00:15:29.499 "trsvcid": "4420" 00:15:29.499 }, 00:15:29.499 "peer_address": { 00:15:29.499 "trtype": "RDMA", 00:15:29.499 "adrfam": "IPv4", 00:15:29.499 "traddr": "192.168.100.8", 00:15:29.499 "trsvcid": "55755" 00:15:29.499 }, 00:15:29.499 "auth": { 00:15:29.499 "state": "completed", 00:15:29.499 "digest": "sha384", 00:15:29.499 "dhgroup": "ffdhe4096" 00:15:29.499 } 00:15:29.499 } 00:15:29.499 ]' 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.499 12:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.758 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:29.758 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:30.326 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.584 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:30.584 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.584 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.584 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.584 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.584 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:30.584 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:30.846 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:30.846 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.846 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:30.846 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:30.846 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.846 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.847 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.847 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.847 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.847 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.847 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.847 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.847 12:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.105 00:15:31.105 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 
-- # hostrpc bdev_nvme_get_controllers 00:15:31.105 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.105 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.105 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.105 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.105 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.105 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.364 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.364 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.364 { 00:15:31.364 "cntlid": 77, 00:15:31.364 "qid": 0, 00:15:31.364 "state": "enabled", 00:15:31.364 "thread": "nvmf_tgt_poll_group_000", 00:15:31.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:31.364 "listen_address": { 00:15:31.364 "trtype": "RDMA", 00:15:31.364 "adrfam": "IPv4", 00:15:31.364 "traddr": "192.168.100.8", 00:15:31.364 "trsvcid": "4420" 00:15:31.364 }, 00:15:31.364 "peer_address": { 00:15:31.364 "trtype": "RDMA", 00:15:31.364 "adrfam": "IPv4", 00:15:31.364 "traddr": "192.168.100.8", 00:15:31.364 "trsvcid": "47855" 00:15:31.365 }, 00:15:31.365 "auth": { 00:15:31.365 "state": "completed", 00:15:31.365 "digest": "sha384", 00:15:31.365 "dhgroup": "ffdhe4096" 00:15:31.365 } 00:15:31.365 } 00:15:31.365 ]' 00:15:31.365 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.365 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.365 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.365 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.365 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.365 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.365 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.365 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.624 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:31.624 12:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:32.191 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.191 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:32.191 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.191 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.191 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.191 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.191 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:32.191 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.450 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.709 00:15:32.709 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.709 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.709 12:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.967 { 00:15:32.967 "cntlid": 79, 00:15:32.967 "qid": 0, 00:15:32.967 "state": "enabled", 00:15:32.967 "thread": "nvmf_tgt_poll_group_000", 00:15:32.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:32.967 "listen_address": { 00:15:32.967 "trtype": "RDMA", 00:15:32.967 "adrfam": "IPv4", 00:15:32.967 "traddr": "192.168.100.8", 00:15:32.967 "trsvcid": "4420" 00:15:32.967 }, 00:15:32.967 "peer_address": { 00:15:32.967 "trtype": "RDMA", 00:15:32.967 "adrfam": "IPv4", 00:15:32.967 "traddr": "192.168.100.8", 00:15:32.967 "trsvcid": "39978" 00:15:32.967 }, 00:15:32.967 "auth": { 00:15:32.967 "state": "completed", 00:15:32.967 "digest": "sha384", 00:15:32.967 "dhgroup": "ffdhe4096" 00:15:32.967 } 00:15:32.967 } 00:15:32.967 ]' 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.967 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.225 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:33.225 12:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:33.793 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.051 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:34.051 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.051 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.051 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.051 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.051 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.051 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.051 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.310 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:34.310 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.310 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.310 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:34.311 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:34.311 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.311 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.311 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.311 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.311 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.311 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.311 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.311 12:54:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.569 00:15:34.569 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.570 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.570 12:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.829 { 00:15:34.829 "cntlid": 81, 00:15:34.829 "qid": 0, 00:15:34.829 "state": "enabled", 00:15:34.829 "thread": "nvmf_tgt_poll_group_000", 00:15:34.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:34.829 "listen_address": { 00:15:34.829 "trtype": "RDMA", 00:15:34.829 "adrfam": "IPv4", 00:15:34.829 "traddr": "192.168.100.8", 00:15:34.829 "trsvcid": "4420" 00:15:34.829 }, 00:15:34.829 "peer_address": { 00:15:34.829 "trtype": "RDMA", 00:15:34.829 "adrfam": "IPv4", 00:15:34.829 "traddr": "192.168.100.8", 00:15:34.829 "trsvcid": "53120" 00:15:34.829 }, 00:15:34.829 "auth": { 00:15:34.829 "state": "completed", 00:15:34.829 "digest": "sha384", 00:15:34.829 "dhgroup": "ffdhe6144" 00:15:34.829 } 00:15:34.829 } 00:15:34.829 ]' 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.829 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.088 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:35.088 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:35.655 12:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.655 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:35.655 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.655 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
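The bdev_connect echo above expands to a single SPDK RPC against the host-side daemon. A minimal standalone sketch of the host-side sequence for this sha384/ffdhe6144 pass, assuming the /var/tmp/host.sock RPC socket, the 192.168.100.8 RDMA listener, and the key names used throughout this run (key material elided):

    # Host-side DH-HMAC-CHAP setup for one digest/dhgroup pass (sketch;
    # paths and addresses are the ones visible in this log).
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Restrict the host to the digest/dhgroup pair under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # Attach the controller, authenticating with key1 and the ckey1
    # controller key (bidirectional authentication).
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

The same attach, with only the digest, dhgroup, and key index varying, repeats for every keyid in each dhgroup pass of the loop.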
00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.914 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.484 00:15:36.484 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.484 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.484 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.484 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.484 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.484 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.484 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.484 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.484 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.484 { 00:15:36.484 "cntlid": 83, 00:15:36.484 "qid": 0, 00:15:36.484 "state": "enabled", 00:15:36.484 "thread": "nvmf_tgt_poll_group_000", 00:15:36.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:36.484 "listen_address": { 00:15:36.484 "trtype": "RDMA", 00:15:36.484 "adrfam": "IPv4", 00:15:36.484 "traddr": "192.168.100.8", 00:15:36.484 "trsvcid": "4420" 00:15:36.484 }, 00:15:36.484 "peer_address": { 00:15:36.484 "trtype": "RDMA", 00:15:36.484 "adrfam": "IPv4", 00:15:36.484 "traddr": "192.168.100.8", 00:15:36.484 "trsvcid": "42700" 00:15:36.484 }, 00:15:36.484 "auth": { 00:15:36.484 "state": "completed", 00:15:36.484 "digest": "sha384", 00:15:36.484 "dhgroup": "ffdhe6144" 00:15:36.484 } 00:15:36.484 } 00:15:36.484 ]' 00:15:36.484 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.484 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.484 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.743 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:36.743 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.743 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.743 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
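For reference, the check this trace repeats after every attach condenses to a few RPC calls. This is a hedged sketch assembled from commands visible in the log (socket paths and NQNs copied from the trace; the unqualified rpc.py call stands for the target-side default socket that auth.sh reaches through its rpc_cmd wrapper), not a verbatim excerpt of auth.sh:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # the attached controller must show up on the host-side RPC socket
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # the target reports the negotiated auth parameters per qpair
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # tear the controller down before the next key is exercised
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0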
00:15:36.743 12:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.001 12:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:37.001 12:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:37.569 12:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.569 12:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:37.569 12:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.569 12:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.569 12:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.569 12:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.569 12:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:37.569 12:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:37.829 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:37.829 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.829 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:37.829 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:37.829 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:37.829 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.829 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.829 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.829 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.829 12:54:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.829 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.829 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.829 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.088 00:15:38.088 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.088 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.088 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.346 { 00:15:38.346 "cntlid": 85, 00:15:38.346 "qid": 0, 00:15:38.346 "state": "enabled", 00:15:38.346 "thread": "nvmf_tgt_poll_group_000", 00:15:38.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:38.346 "listen_address": { 00:15:38.346 "trtype": "RDMA", 00:15:38.346 "adrfam": "IPv4", 00:15:38.346 "traddr": "192.168.100.8", 00:15:38.346 "trsvcid": "4420" 00:15:38.346 }, 00:15:38.346 "peer_address": { 00:15:38.346 "trtype": "RDMA", 00:15:38.346 "adrfam": "IPv4", 00:15:38.346 "traddr": "192.168.100.8", 00:15:38.346 "trsvcid": "51606" 00:15:38.346 }, 00:15:38.346 "auth": { 00:15:38.346 "state": "completed", 00:15:38.346 "digest": "sha384", 00:15:38.346 "dhgroup": "ffdhe6144" 00:15:38.346 } 00:15:38.346 } 00:15:38.346 ]' 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.346 
12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.346 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.603 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:38.603 12:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:39.170 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.429 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:39.429 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.429 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.429 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.429 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.429 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.429 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.688 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:39.688 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.688 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.688 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:39.689 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:39.689 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.689 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:39.689 12:54:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.689 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.689 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.689 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:39.689 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.689 12:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.947 00:15:39.947 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.947 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.947 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.207 { 00:15:40.207 "cntlid": 87, 00:15:40.207 "qid": 0, 00:15:40.207 "state": "enabled", 00:15:40.207 "thread": "nvmf_tgt_poll_group_000", 00:15:40.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:40.207 "listen_address": { 00:15:40.207 "trtype": "RDMA", 00:15:40.207 "adrfam": "IPv4", 00:15:40.207 "traddr": "192.168.100.8", 00:15:40.207 "trsvcid": "4420" 00:15:40.207 }, 00:15:40.207 "peer_address": { 00:15:40.207 "trtype": "RDMA", 00:15:40.207 "adrfam": "IPv4", 00:15:40.207 "traddr": "192.168.100.8", 00:15:40.207 "trsvcid": "32971" 00:15:40.207 }, 00:15:40.207 "auth": { 00:15:40.207 "state": "completed", 00:15:40.207 "digest": "sha384", 00:15:40.207 "dhgroup": "ffdhe6144" 00:15:40.207 } 00:15:40.207 } 00:15:40.207 ]' 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.207 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.466 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:40.466 12:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:41.033 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.033 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:41.033 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.033 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.293 12:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.860 00:15:41.860 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.860 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.860 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.118 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.118 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.118 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.118 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.118 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.118 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.118 { 00:15:42.118 "cntlid": 89, 00:15:42.118 "qid": 0, 00:15:42.118 "state": "enabled", 00:15:42.118 "thread": "nvmf_tgt_poll_group_000", 00:15:42.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:42.118 "listen_address": { 00:15:42.118 "trtype": "RDMA", 00:15:42.119 "adrfam": "IPv4", 00:15:42.119 "traddr": "192.168.100.8", 00:15:42.119 "trsvcid": "4420" 00:15:42.119 }, 00:15:42.119 "peer_address": { 00:15:42.119 "trtype": "RDMA", 00:15:42.119 "adrfam": "IPv4", 00:15:42.119 "traddr": "192.168.100.8", 00:15:42.119 "trsvcid": "60359" 00:15:42.119 }, 00:15:42.119 "auth": { 00:15:42.119 "state": "completed", 00:15:42.119 "digest": "sha384", 00:15:42.119 "dhgroup": "ffdhe8192" 00:15:42.119 } 00:15:42.119 } 00:15:42.119 ]' 00:15:42.119 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.119 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.119 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.119 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.119 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.119 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.119 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.119 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.377 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:42.377 12:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:42.943 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
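The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion recurring in this trace is what distinguishes the key3 rounds: when no controller key exists for an index, the array expands to zero words and --dhchap-ctrlr-key drops out of the command entirely (compare the key3 nvmf_subsystem_add_host calls above, which carry no ctrlr key). A minimal sketch of the idiom, with placeholder values rather than the run's real keyring entries:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  ckeys=(ckey0 ckey1 ckey2 "")   # key3 has no controller key in this run
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  # for keyid=3 the expansion yields nothing, so the flag is omitted
  echo rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"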
00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.201 12:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.768 00:15:43.768 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.768 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.768 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.027 { 00:15:44.027 "cntlid": 91, 00:15:44.027 "qid": 0, 00:15:44.027 "state": "enabled", 00:15:44.027 "thread": "nvmf_tgt_poll_group_000", 00:15:44.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:44.027 "listen_address": { 00:15:44.027 "trtype": "RDMA", 00:15:44.027 "adrfam": "IPv4", 00:15:44.027 "traddr": "192.168.100.8", 00:15:44.027 "trsvcid": "4420" 00:15:44.027 }, 00:15:44.027 "peer_address": { 00:15:44.027 "trtype": "RDMA", 00:15:44.027 "adrfam": "IPv4", 00:15:44.027 "traddr": "192.168.100.8", 00:15:44.027 "trsvcid": "35431" 00:15:44.027 }, 00:15:44.027 "auth": { 
00:15:44.027 "state": "completed", 00:15:44.027 "digest": "sha384", 00:15:44.027 "dhgroup": "ffdhe8192" 00:15:44.027 } 00:15:44.027 } 00:15:44.027 ]' 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.027 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.285 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:44.286 12:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:44.853 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.112 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:45.112 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.112 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.112 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.112 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.112 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.112 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.371 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:45.372 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:15:45.372 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.372 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:45.372 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:45.372 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.372 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.372 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.372 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.372 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.372 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.372 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.372 12:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.630 00:15:45.889 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.889 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.889 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.889 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.889 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.889 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.889 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.889 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.889 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.889 { 00:15:45.889 "cntlid": 93, 00:15:45.889 "qid": 0, 00:15:45.889 "state": "enabled", 00:15:45.889 "thread": "nvmf_tgt_poll_group_000", 00:15:45.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:45.889 "listen_address": { 00:15:45.889 "trtype": "RDMA", 00:15:45.889 "adrfam": "IPv4", 00:15:45.889 "traddr": "192.168.100.8", 
00:15:45.889 "trsvcid": "4420" 00:15:45.889 }, 00:15:45.889 "peer_address": { 00:15:45.889 "trtype": "RDMA", 00:15:45.889 "adrfam": "IPv4", 00:15:45.889 "traddr": "192.168.100.8", 00:15:45.889 "trsvcid": "46766" 00:15:45.889 }, 00:15:45.889 "auth": { 00:15:45.889 "state": "completed", 00:15:45.889 "digest": "sha384", 00:15:45.889 "dhgroup": "ffdhe8192" 00:15:45.889 } 00:15:45.889 } 00:15:45.889 ]' 00:15:45.889 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.148 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.148 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.148 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.148 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.148 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.148 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.148 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.407 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:46.407 12:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:46.974 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.974 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:46.974 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.974 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.974 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.974 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.974 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.974 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:15:47.232 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:47.232 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.232 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.232 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:47.233 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.233 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.233 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:47.233 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.233 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.233 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.233 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.233 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.233 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.800 00:15:47.800 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.800 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.800 12:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.800 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.800 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.800 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.800 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.800 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.800 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.800 { 00:15:47.800 "cntlid": 95, 00:15:47.800 "qid": 0, 00:15:47.800 "state": "enabled", 00:15:47.800 "thread": "nvmf_tgt_poll_group_000", 00:15:47.800 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:47.800 "listen_address": { 00:15:47.800 "trtype": "RDMA", 00:15:47.800 "adrfam": "IPv4", 00:15:47.800 "traddr": "192.168.100.8", 00:15:47.800 "trsvcid": "4420" 00:15:47.800 }, 00:15:47.800 "peer_address": { 00:15:47.800 "trtype": "RDMA", 00:15:47.800 "adrfam": "IPv4", 00:15:47.800 "traddr": "192.168.100.8", 00:15:47.801 "trsvcid": "54607" 00:15:47.801 }, 00:15:47.801 "auth": { 00:15:47.801 "state": "completed", 00:15:47.801 "digest": "sha384", 00:15:47.801 "dhgroup": "ffdhe8192" 00:15:47.801 } 00:15:47.801 } 00:15:47.801 ]' 00:15:47.801 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.059 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.060 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.060 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:48.060 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.060 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.060 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.060 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.318 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:48.318 12:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:48.885 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.885 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:48.885 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.885 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.885 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.885 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:48.885 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.886 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.886 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.886 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:49.144 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:49.144 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.144 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:49.144 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:49.144 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.144 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.144 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.144 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.144 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.144 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.144 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.144 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.145 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.403 00:15:49.403 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.403 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.403 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.662 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.662 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.662 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.662 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.662 12:54:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.662 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.662 { 00:15:49.662 "cntlid": 97, 00:15:49.662 "qid": 0, 00:15:49.662 "state": "enabled", 00:15:49.662 "thread": "nvmf_tgt_poll_group_000", 00:15:49.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:49.662 "listen_address": { 00:15:49.662 "trtype": "RDMA", 00:15:49.662 "adrfam": "IPv4", 00:15:49.662 "traddr": "192.168.100.8", 00:15:49.662 "trsvcid": "4420" 00:15:49.662 }, 00:15:49.662 "peer_address": { 00:15:49.662 "trtype": "RDMA", 00:15:49.662 "adrfam": "IPv4", 00:15:49.662 "traddr": "192.168.100.8", 00:15:49.662 "trsvcid": "57877" 00:15:49.662 }, 00:15:49.662 "auth": { 00:15:49.662 "state": "completed", 00:15:49.662 "digest": "sha512", 00:15:49.662 "dhgroup": "null" 00:15:49.662 } 00:15:49.662 } 00:15:49.662 ]' 00:15:49.662 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.662 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.662 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.663 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.663 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.663 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.663 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.663 12:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.922 12:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:49.922 12:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:50.489 12:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.748 12:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:50.748 12:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.748 12:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
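At this point the loop has moved past the ffdhe groups to sha512 with the null dhgroup, i.e. DH-HMAC-CHAP authentication without an FFDHE exchange. The kernel-host side of each round is plain nvme-cli; the flags below are copied from the connect calls in this trace, with the generated DHHC-1 secrets replaced by placeholders:

  nvme connect -t rdma -a 192.168.100.8 \
    -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
    --dhchap-secret 'DHHC-1:00:<host key>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0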
00:15:50.748 12:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.748 12:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.748 12:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.748 12:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.748 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:50.748 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.748 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:50.748 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:50.748 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.748 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.748 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.748 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.748 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.007 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.007 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.007 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.007 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.007 00:15:51.266 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.266 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.266 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.266 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.266 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.266 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.266 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.266 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.266 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.266 { 00:15:51.266 "cntlid": 99, 00:15:51.266 "qid": 0, 00:15:51.266 "state": "enabled", 00:15:51.266 "thread": "nvmf_tgt_poll_group_000", 00:15:51.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:51.266 "listen_address": { 00:15:51.266 "trtype": "RDMA", 00:15:51.266 "adrfam": "IPv4", 00:15:51.266 "traddr": "192.168.100.8", 00:15:51.266 "trsvcid": "4420" 00:15:51.266 }, 00:15:51.266 "peer_address": { 00:15:51.266 "trtype": "RDMA", 00:15:51.266 "adrfam": "IPv4", 00:15:51.266 "traddr": "192.168.100.8", 00:15:51.266 "trsvcid": "35740" 00:15:51.266 }, 00:15:51.266 "auth": { 00:15:51.266 "state": "completed", 00:15:51.266 "digest": "sha512", 00:15:51.266 "dhgroup": "null" 00:15:51.266 } 00:15:51.266 } 00:15:51.266 ]' 00:15:51.266 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.266 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.266 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.525 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:51.525 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.525 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.525 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.525 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.784 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:51.784 12:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:52.351 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.351 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:52.351 
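The qpair dump above is the assertion point: after each attach, the harness queries nvmf_subsystem_get_qpairs on the target and checks the negotiated auth fields with jq, roughly as below (rpc_cmd is the suite's target-side rpc.py wrapper seen throughout this trace):

  # Assert the negotiated auth parameters, mirroring target/auth.sh@74-@77.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
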
12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.351 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.351 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.351 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.351 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.351 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.610 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:52.610 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.611 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:52.611 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:52.611 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.611 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.611 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.611 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.611 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.611 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.611 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.611 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.611 12:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.870 00:15:52.870 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.870 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.870 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.128 
12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.128 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.128 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.128 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.128 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.128 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.128 { 00:15:53.128 "cntlid": 101, 00:15:53.128 "qid": 0, 00:15:53.128 "state": "enabled", 00:15:53.128 "thread": "nvmf_tgt_poll_group_000", 00:15:53.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:53.128 "listen_address": { 00:15:53.128 "trtype": "RDMA", 00:15:53.128 "adrfam": "IPv4", 00:15:53.128 "traddr": "192.168.100.8", 00:15:53.128 "trsvcid": "4420" 00:15:53.128 }, 00:15:53.128 "peer_address": { 00:15:53.128 "trtype": "RDMA", 00:15:53.128 "adrfam": "IPv4", 00:15:53.128 "traddr": "192.168.100.8", 00:15:53.128 "trsvcid": "43720" 00:15:53.128 }, 00:15:53.128 "auth": { 00:15:53.128 "state": "completed", 00:15:53.128 "digest": "sha512", 00:15:53.128 "dhgroup": "null" 00:15:53.128 } 00:15:53.128 } 00:15:53.128 ]' 00:15:53.128 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.128 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.128 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.129 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:53.129 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.129 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.129 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.129 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.386 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:53.387 12:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:15:53.953 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.210 12:54:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.211 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.469 00:15:54.469 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.469 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.469 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.726 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.726 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.726 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.726 12:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.726 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.726 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.726 { 00:15:54.726 "cntlid": 103, 00:15:54.726 "qid": 0, 00:15:54.726 "state": "enabled", 00:15:54.726 "thread": "nvmf_tgt_poll_group_000", 00:15:54.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:54.726 "listen_address": { 00:15:54.726 "trtype": "RDMA", 00:15:54.726 "adrfam": "IPv4", 00:15:54.726 "traddr": "192.168.100.8", 00:15:54.726 "trsvcid": "4420" 00:15:54.726 }, 00:15:54.726 "peer_address": { 00:15:54.726 "trtype": "RDMA", 00:15:54.726 "adrfam": "IPv4", 00:15:54.726 "traddr": "192.168.100.8", 00:15:54.726 "trsvcid": "54689" 00:15:54.726 }, 00:15:54.726 "auth": { 00:15:54.726 "state": "completed", 00:15:54.726 "digest": "sha512", 00:15:54.726 "dhgroup": "null" 00:15:54.726 } 00:15:54.726 } 00:15:54.726 ]' 00:15:54.726 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.726 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.726 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.726 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:54.727 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.984 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.984 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.984 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.984 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:54.984 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:15:55.917 12:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.917 12:54:22 
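Note how the key3 pass just above differs from the others: ckeys[3] is empty, so both nvmf_subsystem_add_host and the attach were issued with --dhchap-key key3 only, exercising one-way authentication (the host proves itself; the controller is not challenged back). The ${ckeys[$3]:+...} expansion at target/auth.sh@68 is what makes the controller key optional; a sketch of that mechanism ($3 is the key-id argument, and the NQN variables here are illustrative):

  # Empty ckeys[keyid] => the :+ expansion yields an empty array, so no
  # --dhchap-ctrlr-key argument is passed for key3.
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
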
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.917 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.175 00:15:56.175 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:15:56.175 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.175 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.433 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.433 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.433 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.433 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.433 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.433 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.433 { 00:15:56.433 "cntlid": 105, 00:15:56.433 "qid": 0, 00:15:56.433 "state": "enabled", 00:15:56.433 "thread": "nvmf_tgt_poll_group_000", 00:15:56.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:56.433 "listen_address": { 00:15:56.433 "trtype": "RDMA", 00:15:56.433 "adrfam": "IPv4", 00:15:56.433 "traddr": "192.168.100.8", 00:15:56.433 "trsvcid": "4420" 00:15:56.433 }, 00:15:56.433 "peer_address": { 00:15:56.433 "trtype": "RDMA", 00:15:56.433 "adrfam": "IPv4", 00:15:56.433 "traddr": "192.168.100.8", 00:15:56.433 "trsvcid": "52057" 00:15:56.433 }, 00:15:56.433 "auth": { 00:15:56.433 "state": "completed", 00:15:56.433 "digest": "sha512", 00:15:56.433 "dhgroup": "ffdhe2048" 00:15:56.433 } 00:15:56.433 } 00:15:56.433 ]' 00:15:56.433 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.433 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.433 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.691 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.691 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.691 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.691 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.691 12:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.691 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:56.691 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 
--dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:15:57.257 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.516 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:57.516 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.516 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.516 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.516 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.516 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.516 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.775 12:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.033 00:15:58.033 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.033 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.033 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.033 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.033 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.033 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.033 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.292 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.292 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.292 { 00:15:58.292 "cntlid": 107, 00:15:58.292 "qid": 0, 00:15:58.292 "state": "enabled", 00:15:58.292 "thread": "nvmf_tgt_poll_group_000", 00:15:58.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:58.292 "listen_address": { 00:15:58.292 "trtype": "RDMA", 00:15:58.292 "adrfam": "IPv4", 00:15:58.292 "traddr": "192.168.100.8", 00:15:58.292 "trsvcid": "4420" 00:15:58.292 }, 00:15:58.292 "peer_address": { 00:15:58.292 "trtype": "RDMA", 00:15:58.292 "adrfam": "IPv4", 00:15:58.292 "traddr": "192.168.100.8", 00:15:58.292 "trsvcid": "42645" 00:15:58.292 }, 00:15:58.292 "auth": { 00:15:58.292 "state": "completed", 00:15:58.292 "digest": "sha512", 00:15:58.292 "dhgroup": "ffdhe2048" 00:15:58.292 } 00:15:58.292 } 00:15:58.292 ]' 00:15:58.292 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.292 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.292 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.292 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.292 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.292 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.292 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.292 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.550 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 
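Each combination is additionally validated with the kernel initiator: the @80/@36 lines above hand the same secrets to nvme connect. The DHHC-1:<t>: prefix is the standard NVMe-oF representation of a DH-HMAC-CHAP secret; as far as I recall from the spec, <t> encodes the transform applied to the base64 payload (00 = unhashed, 01/02/03 = SHA-256/384/512). Trimmed to its essentials (secrets abbreviated; the full strings appear in the trace):

  # Kernel-initiator connect/disconnect with the same DH-HMAC-CHAP secrets.
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
    --dhchap-secret 'DHHC-1:01:ZjIz...IHsm:' \
    --dhchap-ctrl-secret 'DHHC-1:02:OWYy...Zqew==:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
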
00:15:58.551 12:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:15:59.118 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.118 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:59.118 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.118 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.118 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.118 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.118 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.118 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.377 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.636 00:15:59.636 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.636 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.636 12:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.894 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.894 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.894 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.894 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.895 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.895 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.895 { 00:15:59.895 "cntlid": 109, 00:15:59.895 "qid": 0, 00:15:59.895 "state": "enabled", 00:15:59.895 "thread": "nvmf_tgt_poll_group_000", 00:15:59.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:59.895 "listen_address": { 00:15:59.895 "trtype": "RDMA", 00:15:59.895 "adrfam": "IPv4", 00:15:59.895 "traddr": "192.168.100.8", 00:15:59.895 "trsvcid": "4420" 00:15:59.895 }, 00:15:59.895 "peer_address": { 00:15:59.895 "trtype": "RDMA", 00:15:59.895 "adrfam": "IPv4", 00:15:59.895 "traddr": "192.168.100.8", 00:15:59.895 "trsvcid": "34131" 00:15:59.895 }, 00:15:59.895 "auth": { 00:15:59.895 "state": "completed", 00:15:59.895 "digest": "sha512", 00:15:59.895 "dhgroup": "ffdhe2048" 00:15:59.895 } 00:15:59.895 } 00:15:59.895 ]' 00:15:59.895 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.895 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.895 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.895 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.895 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.895 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.895 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.895 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.153 12:54:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:16:00.153 12:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:16:00.721 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.980 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:00.980 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.980 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.980 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.980 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.980 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:00.980 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:01.239 12:54:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.239 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.239 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.498 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.498 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.498 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.498 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.498 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.498 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.498 { 00:16:01.498 "cntlid": 111, 00:16:01.498 "qid": 0, 00:16:01.498 "state": "enabled", 00:16:01.498 "thread": "nvmf_tgt_poll_group_000", 00:16:01.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:01.498 "listen_address": { 00:16:01.498 "trtype": "RDMA", 00:16:01.498 "adrfam": "IPv4", 00:16:01.498 "traddr": "192.168.100.8", 00:16:01.498 "trsvcid": "4420" 00:16:01.498 }, 00:16:01.498 "peer_address": { 00:16:01.498 "trtype": "RDMA", 00:16:01.498 "adrfam": "IPv4", 00:16:01.498 "traddr": "192.168.100.8", 00:16:01.498 "trsvcid": "41835" 00:16:01.498 }, 00:16:01.498 "auth": { 00:16:01.498 "state": "completed", 00:16:01.498 "digest": "sha512", 00:16:01.498 "dhgroup": "ffdhe2048" 00:16:01.498 } 00:16:01.498 } 00:16:01.498 ]' 00:16:01.498 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.498 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.498 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.756 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.756 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.756 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.756 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.756 12:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.757 12:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:01.757 12:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:02.692 12:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.692 12:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:02.692 12:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.692 12:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.692 12:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.692 12:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.692 12:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.692 12:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:02.692 12:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.692 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.955 00:16:02.955 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.955 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.955 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.211 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.211 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.211 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.211 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.211 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.211 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.211 { 00:16:03.211 "cntlid": 113, 00:16:03.211 "qid": 0, 00:16:03.211 "state": "enabled", 00:16:03.211 "thread": "nvmf_tgt_poll_group_000", 00:16:03.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:03.211 "listen_address": { 00:16:03.211 "trtype": "RDMA", 00:16:03.211 "adrfam": "IPv4", 00:16:03.211 "traddr": "192.168.100.8", 00:16:03.211 "trsvcid": "4420" 00:16:03.211 }, 00:16:03.211 "peer_address": { 00:16:03.211 "trtype": "RDMA", 00:16:03.211 "adrfam": "IPv4", 00:16:03.211 "traddr": "192.168.100.8", 00:16:03.211 "trsvcid": "42989" 00:16:03.211 }, 00:16:03.211 "auth": { 00:16:03.211 "state": "completed", 00:16:03.211 "digest": "sha512", 00:16:03.211 "dhgroup": "ffdhe3072" 00:16:03.211 } 00:16:03.211 } 00:16:03.211 ]' 00:16:03.211 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.211 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.211 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.468 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.468 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.468 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.468 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.468 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.726 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:16:03.726 12:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:16:04.291 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.291 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:04.291 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.291 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.291 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.291 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.291 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:04.291 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 
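By this point the sweep has stepped through the groups null, ffdhe2048, and ffdhe3072; the only knob that changes between blocks is --dhchap-dhgroups, which, as I understand DH-HMAC-CHAP, folds an ephemeral finite-field Diffie-Hellman exchange into the challenge so the session is keyed by fresh material rather than by the configured secret alone. The switch itself is one host-side call (target/auth.sh@121 above):

  # Re-arm the host for the next DH group before repeating the key sweep.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # Aside (an assumption, not part of this job): recent nvme-cli can mint such
  # secrets itself, e.g. `nvme gen-dhchap-key --key-length 32 --hmac 1`.
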
00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.549 12:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.813 00:16:04.813 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.813 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.813 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.072 { 00:16:05.072 "cntlid": 115, 00:16:05.072 "qid": 0, 00:16:05.072 "state": "enabled", 00:16:05.072 "thread": "nvmf_tgt_poll_group_000", 00:16:05.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:05.072 "listen_address": { 00:16:05.072 "trtype": "RDMA", 00:16:05.072 "adrfam": "IPv4", 00:16:05.072 "traddr": "192.168.100.8", 00:16:05.072 "trsvcid": "4420" 00:16:05.072 }, 00:16:05.072 "peer_address": { 00:16:05.072 "trtype": "RDMA", 00:16:05.072 "adrfam": "IPv4", 00:16:05.072 "traddr": "192.168.100.8", 00:16:05.072 "trsvcid": "52226" 00:16:05.072 }, 00:16:05.072 "auth": { 00:16:05.072 "state": "completed", 00:16:05.072 "digest": "sha512", 00:16:05.072 "dhgroup": "ffdhe3072" 00:16:05.072 } 00:16:05.072 } 00:16:05.072 ]' 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
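Stepping back, this whole section is a single matrix sweep, and its loop structure can be read directly off the @119/@120/@121/@123 xtrace markers (variable and helper names exactly as they appear in the trace; the digest is fixed at sha512 for this part of the run):

  # Shape of the sweep, reconstructed from the markers above.
  for dhgroup in "${dhgroups[@]}"; do    # @119: null, ffdhe2048, ffdhe3072, ...
    for keyid in "${!keys[@]}"; do       # @120: key ids 0..3
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"  # @121
      connect_authenticate "$digest" "$dhgroup" "$keyid"                                     # @123
    done
  done
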
00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.072 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.330 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:16:05.330 12:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:16:05.895 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.153 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:06.153 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.153 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.153 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.153 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.153 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.153 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.153 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:06.154 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.412 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:06.412 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:06.412 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:06.412 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.412 
12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.412 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.412 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.412 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.412 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.412 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.412 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.670 00:16:06.670 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.670 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.670 12:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.670 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.670 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.670 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.670 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.670 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.670 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.670 { 00:16:06.670 "cntlid": 117, 00:16:06.670 "qid": 0, 00:16:06.670 "state": "enabled", 00:16:06.670 "thread": "nvmf_tgt_poll_group_000", 00:16:06.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:06.670 "listen_address": { 00:16:06.670 "trtype": "RDMA", 00:16:06.670 "adrfam": "IPv4", 00:16:06.670 "traddr": "192.168.100.8", 00:16:06.670 "trsvcid": "4420" 00:16:06.670 }, 00:16:06.670 "peer_address": { 00:16:06.670 "trtype": "RDMA", 00:16:06.670 "adrfam": "IPv4", 00:16:06.670 "traddr": "192.168.100.8", 00:16:06.670 "trsvcid": "52283" 00:16:06.670 }, 00:16:06.670 "auth": { 00:16:06.670 "state": "completed", 00:16:06.670 "digest": "sha512", 00:16:06.670 "dhgroup": "ffdhe3072" 00:16:06.670 } 00:16:06.670 } 00:16:06.670 ]' 00:16:06.670 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:06.928 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.928 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.928 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.928 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.928 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.928 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.928 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.186 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:16:07.186 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:16:07.751 12:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.751 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:07.751 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.751 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.751 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.751 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.751 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:07.751 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
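Note: each pass also exercises the kernel initiator through nvme-cli, as in the connect/disconnect entries above. The shape of that leg is sketched below with the secrets elided; the DHHC-1:<t>: prefix encodes the key transformation (per the NVMe DH-HMAC-CHAP secret format: 00 = untransformed, 01/02/03 = HMAC-SHA-256/384/512), which is why the same secret text recurs for a given role across passes:

  # Kernel-initiator leg: connect with DH-HMAC-CHAP secrets, let auth
  # complete, then disconnect so the host entry can be removed again.
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "DHHC-1:02:<base64 host key>:" \
      --dhchap-ctrl-secret "DHHC-1:01:<base64 ctrl key>:"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0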
00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.009 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.266 00:16:08.266 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.266 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.266 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.524 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.524 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.524 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.524 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.524 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.524 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.524 { 00:16:08.524 "cntlid": 119, 00:16:08.524 "qid": 0, 00:16:08.524 "state": "enabled", 00:16:08.524 "thread": "nvmf_tgt_poll_group_000", 00:16:08.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:08.524 "listen_address": { 00:16:08.524 "trtype": "RDMA", 00:16:08.524 "adrfam": "IPv4", 00:16:08.524 "traddr": "192.168.100.8", 00:16:08.524 "trsvcid": "4420" 00:16:08.524 }, 00:16:08.524 "peer_address": { 00:16:08.524 "trtype": "RDMA", 00:16:08.524 "adrfam": "IPv4", 00:16:08.524 "traddr": "192.168.100.8", 00:16:08.524 "trsvcid": "52444" 00:16:08.524 }, 00:16:08.524 "auth": { 00:16:08.524 "state": "completed", 00:16:08.524 "digest": "sha512", 00:16:08.524 "dhgroup": "ffdhe3072" 
00:16:08.524 } 00:16:08.524 } 00:16:08.524 ]' 00:16:08.524 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.524 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.524 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.524 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:08.524 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.782 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.782 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.782 12:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.782 12:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:08.782 12:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:09.348 12:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.605 12:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:09.605 12:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.605 12:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.605 12:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.605 12:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.605 12:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.605 12:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:09.605 12:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:09.862 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:09.862 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.863 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:16:09.863 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:09.863 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:09.863 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.863 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.863 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.863 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.863 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.863 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.863 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.863 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.120 00:16:10.120 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.120 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.120 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.377 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.377 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.377 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.377 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.377 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.377 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.377 { 00:16:10.377 "cntlid": 121, 00:16:10.377 "qid": 0, 00:16:10.377 "state": "enabled", 00:16:10.377 "thread": "nvmf_tgt_poll_group_000", 00:16:10.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:10.377 "listen_address": { 00:16:10.377 "trtype": "RDMA", 00:16:10.377 "adrfam": "IPv4", 00:16:10.377 "traddr": "192.168.100.8", 00:16:10.377 "trsvcid": "4420" 00:16:10.377 }, 00:16:10.377 "peer_address": { 00:16:10.377 "trtype": "RDMA", 
00:16:10.377 "adrfam": "IPv4", 00:16:10.377 "traddr": "192.168.100.8", 00:16:10.377 "trsvcid": "36068" 00:16:10.377 }, 00:16:10.377 "auth": { 00:16:10.378 "state": "completed", 00:16:10.378 "digest": "sha512", 00:16:10.378 "dhgroup": "ffdhe4096" 00:16:10.378 } 00:16:10.378 } 00:16:10.378 ]' 00:16:10.378 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.378 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.378 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.378 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:10.378 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.378 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.378 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.378 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.634 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:16:10.634 12:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:16:11.199 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.456 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:11.456 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.456 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.456 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.456 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.456 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:11.456 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:16:11.456 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:11.456 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.456 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.456 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:11.456 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.457 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.457 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.457 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.457 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.457 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.457 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.457 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.457 12:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.021 00:16:12.021 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.021 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.021 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.021 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.021 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.021 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.021 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.021 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.021 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.021 { 00:16:12.021 "cntlid": 123, 00:16:12.021 "qid": 0, 00:16:12.021 "state": "enabled", 00:16:12.021 "thread": "nvmf_tgt_poll_group_000", 
00:16:12.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:12.021 "listen_address": { 00:16:12.021 "trtype": "RDMA", 00:16:12.021 "adrfam": "IPv4", 00:16:12.021 "traddr": "192.168.100.8", 00:16:12.021 "trsvcid": "4420" 00:16:12.021 }, 00:16:12.021 "peer_address": { 00:16:12.021 "trtype": "RDMA", 00:16:12.021 "adrfam": "IPv4", 00:16:12.021 "traddr": "192.168.100.8", 00:16:12.021 "trsvcid": "54566" 00:16:12.021 }, 00:16:12.021 "auth": { 00:16:12.022 "state": "completed", 00:16:12.022 "digest": "sha512", 00:16:12.022 "dhgroup": "ffdhe4096" 00:16:12.022 } 00:16:12.022 } 00:16:12.022 ]' 00:16:12.022 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.022 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.022 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.279 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:12.279 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.279 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.279 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.280 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.280 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:16:12.280 12:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:16:13.346 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.346 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:13.346 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.346 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.346 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.346 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.346 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:16:13.346 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.346 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:13.346 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.347 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.347 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:13.347 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.347 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.347 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.347 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.347 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.347 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.347 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.347 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.347 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.678 00:16:13.678 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.678 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.678 12:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
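Note: the bdev_nvme_attach_controller / bdev_nvme_detach_controller pairs are the SPDK-initiator counterpart of the nvme-cli leg: the same fabric address and keys, but driven over RPC so the target's view of the qpair can be asserted between attach and detach. Condensed from the key2 pass just traced:

  # Authenticate as the host using key2/ckey2, verify, then tear down.
  hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # ... qpair auth state checked here (digest/dhgroup/state, as above) ...
  hostrpc bdev_nvme_detach_controller nvme0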
00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.945 { 00:16:13.945 "cntlid": 125, 00:16:13.945 "qid": 0, 00:16:13.945 "state": "enabled", 00:16:13.945 "thread": "nvmf_tgt_poll_group_000", 00:16:13.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:13.945 "listen_address": { 00:16:13.945 "trtype": "RDMA", 00:16:13.945 "adrfam": "IPv4", 00:16:13.945 "traddr": "192.168.100.8", 00:16:13.945 "trsvcid": "4420" 00:16:13.945 }, 00:16:13.945 "peer_address": { 00:16:13.945 "trtype": "RDMA", 00:16:13.945 "adrfam": "IPv4", 00:16:13.945 "traddr": "192.168.100.8", 00:16:13.945 "trsvcid": "47506" 00:16:13.945 }, 00:16:13.945 "auth": { 00:16:13.945 "state": "completed", 00:16:13.945 "digest": "sha512", 00:16:13.945 "dhgroup": "ffdhe4096" 00:16:13.945 } 00:16:13.945 } 00:16:13.945 ]' 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.945 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.203 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:16:14.203 12:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:16:14.769 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.027 12:54:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.027 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.285 00:16:15.285 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.285 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.285 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.543 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.543 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.543 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.543 12:54:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.543 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.543 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.543 { 00:16:15.543 "cntlid": 127, 00:16:15.543 "qid": 0, 00:16:15.543 "state": "enabled", 00:16:15.543 "thread": "nvmf_tgt_poll_group_000", 00:16:15.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:15.543 "listen_address": { 00:16:15.543 "trtype": "RDMA", 00:16:15.543 "adrfam": "IPv4", 00:16:15.543 "traddr": "192.168.100.8", 00:16:15.543 "trsvcid": "4420" 00:16:15.543 }, 00:16:15.543 "peer_address": { 00:16:15.543 "trtype": "RDMA", 00:16:15.543 "adrfam": "IPv4", 00:16:15.543 "traddr": "192.168.100.8", 00:16:15.543 "trsvcid": "54906" 00:16:15.543 }, 00:16:15.543 "auth": { 00:16:15.543 "state": "completed", 00:16:15.543 "digest": "sha512", 00:16:15.543 "dhgroup": "ffdhe4096" 00:16:15.543 } 00:16:15.543 } 00:16:15.543 ]' 00:16:15.543 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.543 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.543 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.802 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.802 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.802 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.802 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.802 12:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.062 12:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:16.062 12:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:16.628 12:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.628 12:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:16.628 12:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.628 12:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.628 12:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.628 12:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.628 12:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.628 12:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:16.628 12:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.886 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.145 00:16:17.145 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.145 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.145 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.403 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.403 12:54:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.403 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.403 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.403 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.403 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.403 { 00:16:17.403 "cntlid": 129, 00:16:17.403 "qid": 0, 00:16:17.403 "state": "enabled", 00:16:17.403 "thread": "nvmf_tgt_poll_group_000", 00:16:17.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:17.403 "listen_address": { 00:16:17.403 "trtype": "RDMA", 00:16:17.403 "adrfam": "IPv4", 00:16:17.403 "traddr": "192.168.100.8", 00:16:17.403 "trsvcid": "4420" 00:16:17.403 }, 00:16:17.403 "peer_address": { 00:16:17.403 "trtype": "RDMA", 00:16:17.403 "adrfam": "IPv4", 00:16:17.403 "traddr": "192.168.100.8", 00:16:17.403 "trsvcid": "42672" 00:16:17.403 }, 00:16:17.403 "auth": { 00:16:17.403 "state": "completed", 00:16:17.403 "digest": "sha512", 00:16:17.403 "dhgroup": "ffdhe6144" 00:16:17.403 } 00:16:17.403 } 00:16:17.403 ]' 00:16:17.403 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.403 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.403 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.403 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.403 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.662 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.662 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.662 12:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.662 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:16:17.662 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.597 12:54:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.597 12:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.164 00:16:19.164 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.164 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:19.164 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.164 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.164 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.164 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.164 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.164 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.164 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.164 { 00:16:19.164 "cntlid": 131, 00:16:19.164 "qid": 0, 00:16:19.164 "state": "enabled", 00:16:19.164 "thread": "nvmf_tgt_poll_group_000", 00:16:19.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:19.164 "listen_address": { 00:16:19.164 "trtype": "RDMA", 00:16:19.164 "adrfam": "IPv4", 00:16:19.164 "traddr": "192.168.100.8", 00:16:19.164 "trsvcid": "4420" 00:16:19.164 }, 00:16:19.164 "peer_address": { 00:16:19.164 "trtype": "RDMA", 00:16:19.164 "adrfam": "IPv4", 00:16:19.164 "traddr": "192.168.100.8", 00:16:19.164 "trsvcid": "43749" 00:16:19.164 }, 00:16:19.164 "auth": { 00:16:19.164 "state": "completed", 00:16:19.164 "digest": "sha512", 00:16:19.164 "dhgroup": "ffdhe6144" 00:16:19.164 } 00:16:19.164 } 00:16:19.164 ]' 00:16:19.164 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.423 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.423 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.423 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:19.423 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.423 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.423 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.423 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.681 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:16:19.681 12:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret 
DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:16:20.247 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.247 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:20.247 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.247 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.247 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.247 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.247 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:20.247 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.506 12:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.764 00:16:20.764 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.764 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.764 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.022 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.022 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.022 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.022 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.022 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.022 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.022 { 00:16:21.022 "cntlid": 133, 00:16:21.022 "qid": 0, 00:16:21.022 "state": "enabled", 00:16:21.022 "thread": "nvmf_tgt_poll_group_000", 00:16:21.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:21.022 "listen_address": { 00:16:21.022 "trtype": "RDMA", 00:16:21.022 "adrfam": "IPv4", 00:16:21.022 "traddr": "192.168.100.8", 00:16:21.022 "trsvcid": "4420" 00:16:21.022 }, 00:16:21.022 "peer_address": { 00:16:21.022 "trtype": "RDMA", 00:16:21.022 "adrfam": "IPv4", 00:16:21.022 "traddr": "192.168.100.8", 00:16:21.022 "trsvcid": "49851" 00:16:21.022 }, 00:16:21.022 "auth": { 00:16:21.022 "state": "completed", 00:16:21.022 "digest": "sha512", 00:16:21.022 "dhgroup": "ffdhe6144" 00:16:21.022 } 00:16:21.022 } 00:16:21.022 ]' 00:16:21.022 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.022 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.022 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.022 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:21.022 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.280 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.280 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.280 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.280 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:16:21.280 12:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:16:21.847 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.105 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:22.105 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.105 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.105 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.105 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.105 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:22.105 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:22.363 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:22.363 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.363 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.363 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:22.363 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.363 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.363 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:22.364 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.364 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.364 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.364 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.364 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.364 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.621 00:16:22.621 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.621 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.621 12:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.880 { 00:16:22.880 "cntlid": 135, 00:16:22.880 "qid": 0, 00:16:22.880 "state": "enabled", 00:16:22.880 "thread": "nvmf_tgt_poll_group_000", 00:16:22.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:22.880 "listen_address": { 00:16:22.880 "trtype": "RDMA", 00:16:22.880 "adrfam": "IPv4", 00:16:22.880 "traddr": "192.168.100.8", 00:16:22.880 "trsvcid": "4420" 00:16:22.880 }, 00:16:22.880 "peer_address": { 00:16:22.880 "trtype": "RDMA", 00:16:22.880 "adrfam": "IPv4", 00:16:22.880 "traddr": "192.168.100.8", 00:16:22.880 "trsvcid": "52931" 00:16:22.880 }, 00:16:22.880 "auth": { 00:16:22.880 "state": "completed", 00:16:22.880 "digest": "sha512", 00:16:22.880 "dhgroup": "ffdhe6144" 00:16:22.880 } 00:16:22.880 } 00:16:22.880 ]' 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.880 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.138 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 
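For orientation: each connect_authenticate round in the trace above reduces to the shell sketch below. The rpc.py path, host NQN, and 192.168.100.8:4420 RDMA listener are taken from this run; every subcommand and flag shown appears verbatim in the log. One assumption: the script's rpc_cmd talks to the target app's default RPC socket, so the target-side calls are shown here as plain rpc.py invocations without -s.

# Minimal sketch of one connect_authenticate() round from the trace above.
# Assumes the SPDK target is already listening on 192.168.100.8:4420 (RDMA),
# the host-side bdev daemon is reachable at /var/tmp/host.sock, and DHCHAP
# keys key1/ckey1 are already loaded on both sides.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

# Restrict the host initiator to one digest/DH-group pair for this round.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Allow the host on the target, binding it to key1 (ckey1 enables
# bidirectional authentication of the controller as well).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach from the host side with the matching keys...
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# ...then confirm DH-HMAC-CHAP completed on the qpair, as the trace checks
# with jq against digest, dhgroup, and state.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'

# Tear down before the next key/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The same round is then repeated from the kernel initiator side with nvme connect, passing the matching DHHC-1 secrets via --dhchap-secret/--dhchap-ctrl-secret, followed by nvme disconnect, exactly as in the entries that follow.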
00:16:23.138 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:23.703 12:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.963 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.530 00:16:24.530 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.530 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.530 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.789 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.789 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.789 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.789 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.789 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.789 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.789 { 00:16:24.789 "cntlid": 137, 00:16:24.789 "qid": 0, 00:16:24.789 "state": "enabled", 00:16:24.789 "thread": "nvmf_tgt_poll_group_000", 00:16:24.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:24.789 "listen_address": { 00:16:24.789 "trtype": "RDMA", 00:16:24.789 "adrfam": "IPv4", 00:16:24.789 "traddr": "192.168.100.8", 00:16:24.789 "trsvcid": "4420" 00:16:24.789 }, 00:16:24.789 "peer_address": { 00:16:24.789 "trtype": "RDMA", 00:16:24.789 "adrfam": "IPv4", 00:16:24.789 "traddr": "192.168.100.8", 00:16:24.789 "trsvcid": "38655" 00:16:24.789 }, 00:16:24.789 "auth": { 00:16:24.789 "state": "completed", 00:16:24.789 "digest": "sha512", 00:16:24.789 "dhgroup": "ffdhe8192" 00:16:24.789 } 00:16:24.789 } 00:16:24.789 ]' 00:16:24.789 12:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.789 12:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.789 12:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.789 12:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.789 12:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.789 12:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.789 12:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.789 12:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.047 12:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:16:25.047 12:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:16:25.613 12:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.871 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.437 00:16:26.437 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.437 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.437 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.696 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.696 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.696 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.696 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.696 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.696 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.696 { 00:16:26.696 "cntlid": 139, 00:16:26.696 "qid": 0, 00:16:26.696 "state": "enabled", 00:16:26.696 "thread": "nvmf_tgt_poll_group_000", 00:16:26.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:26.696 "listen_address": { 00:16:26.696 "trtype": "RDMA", 00:16:26.696 "adrfam": "IPv4", 00:16:26.696 "traddr": "192.168.100.8", 00:16:26.696 "trsvcid": "4420" 00:16:26.696 }, 00:16:26.696 "peer_address": { 00:16:26.696 "trtype": "RDMA", 00:16:26.696 "adrfam": "IPv4", 00:16:26.696 "traddr": "192.168.100.8", 00:16:26.696 "trsvcid": "33308" 00:16:26.696 }, 00:16:26.696 "auth": { 00:16:26.696 "state": "completed", 00:16:26.696 "digest": "sha512", 00:16:26.696 "dhgroup": "ffdhe8192" 00:16:26.696 } 00:16:26.696 } 00:16:26.696 ]' 00:16:26.696 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.696 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.696 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.696 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.696 12:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.696 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.696 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.696 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.953 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:16:26.954 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: --dhchap-ctrl-secret DHHC-1:02:OWYyN2UzMTc0YmQ1NDE2NzEzNzBjMmUyYzE2YTExYTQ1ZjdjN2M3YWNkZmM0OTMw2mZqew==: 00:16:27.519 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.777 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:27.777 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.777 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.777 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.777 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.777 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:27.777 12:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.035 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.294 00:16:28.294 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.294 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.294 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.553 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.553 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.553 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.553 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.553 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.553 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.553 { 00:16:28.553 "cntlid": 141, 00:16:28.553 "qid": 0, 00:16:28.553 "state": "enabled", 00:16:28.553 "thread": "nvmf_tgt_poll_group_000", 00:16:28.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:28.553 "listen_address": { 00:16:28.553 "trtype": "RDMA", 00:16:28.553 "adrfam": "IPv4", 00:16:28.553 "traddr": "192.168.100.8", 00:16:28.553 "trsvcid": "4420" 00:16:28.553 }, 00:16:28.553 "peer_address": { 00:16:28.553 "trtype": "RDMA", 00:16:28.553 "adrfam": "IPv4", 00:16:28.553 "traddr": "192.168.100.8", 00:16:28.553 "trsvcid": "37005" 00:16:28.553 }, 00:16:28.553 "auth": { 00:16:28.553 "state": "completed", 00:16:28.553 "digest": "sha512", 00:16:28.553 "dhgroup": "ffdhe8192" 00:16:28.553 } 00:16:28.553 } 00:16:28.553 ]' 00:16:28.553 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.553 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.553 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.553 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.553 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.812 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.812 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.812 12:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.812 12:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:16:28.812 12:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:01:N2Q1MDM1MDJmYmY2MjNjMGE4OWNlYTE3YmQ2YTY4ZGQVIxbN: 00:16:29.747 12:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.747 12:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:29.747 12:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.747 12:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.747 12:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.747 12:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.747 12:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:29.747 12:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.747 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.315 00:16:30.315 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.315 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.315 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.573 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.573 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.573 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.573 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.573 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.573 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.573 { 00:16:30.573 "cntlid": 143, 00:16:30.573 "qid": 0, 00:16:30.573 "state": "enabled", 00:16:30.573 "thread": "nvmf_tgt_poll_group_000", 00:16:30.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:30.573 "listen_address": { 00:16:30.573 "trtype": "RDMA", 00:16:30.573 "adrfam": "IPv4", 00:16:30.573 "traddr": "192.168.100.8", 00:16:30.573 "trsvcid": "4420" 00:16:30.573 }, 00:16:30.573 "peer_address": { 00:16:30.573 "trtype": "RDMA", 00:16:30.573 "adrfam": "IPv4", 00:16:30.573 "traddr": "192.168.100.8", 00:16:30.573 "trsvcid": "33458" 00:16:30.573 }, 00:16:30.573 "auth": { 00:16:30.573 "state": "completed", 00:16:30.573 "digest": "sha512", 00:16:30.573 "dhgroup": "ffdhe8192" 00:16:30.573 } 00:16:30.573 } 00:16:30.573 ]' 00:16:30.573 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.573 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.573 12:54:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.573 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:30.573 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.574 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.574 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.574 12:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.832 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:30.832 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:31.398 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.657 12:54:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.657 12:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.657 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.657 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.657 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.657 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.223 00:16:32.223 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.223 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.223 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.481 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.481 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.481 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.481 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.481 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.481 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.481 { 00:16:32.481 "cntlid": 145, 00:16:32.481 "qid": 0, 00:16:32.481 "state": "enabled", 00:16:32.481 "thread": "nvmf_tgt_poll_group_000", 00:16:32.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:32.481 "listen_address": { 00:16:32.482 "trtype": "RDMA", 00:16:32.482 "adrfam": "IPv4", 00:16:32.482 "traddr": "192.168.100.8", 00:16:32.482 "trsvcid": "4420" 00:16:32.482 }, 00:16:32.482 
"peer_address": { 00:16:32.482 "trtype": "RDMA", 00:16:32.482 "adrfam": "IPv4", 00:16:32.482 "traddr": "192.168.100.8", 00:16:32.482 "trsvcid": "47335" 00:16:32.482 }, 00:16:32.482 "auth": { 00:16:32.482 "state": "completed", 00:16:32.482 "digest": "sha512", 00:16:32.482 "dhgroup": "ffdhe8192" 00:16:32.482 } 00:16:32.482 } 00:16:32.482 ]' 00:16:32.482 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.482 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.482 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.482 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:32.482 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.482 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.482 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.482 12:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.741 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:16:32.741 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjQ2OWIxZDBhZTIwNDlhMGM4MzM5YmYxNjE0ZDNhNWNhOGMxYTVkODA1N2MyZGIzTEOjgw==: --dhchap-ctrl-secret DHHC-1:03:ODFkNjYzNTMxMWI1MmRiYTFjMDZhZmQyYzQwYzcwMWFhZjU3YzgyYTc0Yjc4NzExZWFiMzc4MGQwZDVhMmM0M/bTaHc=: 00:16:33.310 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.569 12:54:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:33.569 12:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:33.827 request: 00:16:33.827 { 00:16:33.827 "name": "nvme0", 00:16:33.827 "trtype": "rdma", 00:16:33.827 "traddr": "192.168.100.8", 00:16:33.827 "adrfam": "ipv4", 00:16:33.827 "trsvcid": "4420", 00:16:33.827 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:33.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:33.827 "prchk_reftag": false, 00:16:33.827 "prchk_guard": false, 00:16:33.827 "hdgst": false, 00:16:33.827 "ddgst": false, 00:16:33.827 "dhchap_key": "key2", 00:16:33.827 "allow_unrecognized_csi": false, 00:16:33.827 "method": "bdev_nvme_attach_controller", 00:16:33.827 "req_id": 1 00:16:33.827 } 00:16:33.827 Got JSON-RPC error response 00:16:33.827 response: 00:16:33.827 { 00:16:33.827 "code": -5, 00:16:33.827 "message": "Input/output error" 00:16:33.827 } 00:16:33.827 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:33.827 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.827 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.827 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.827 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:33.827 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.828 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.828 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.828 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.828 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.828 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.828 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.828 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:34.086 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:34.086 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:34.086 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:34.086 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.086 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:34.086 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.086 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:34.086 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:34.086 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:34.344 request: 00:16:34.344 { 00:16:34.344 "name": "nvme0", 00:16:34.344 "trtype": "rdma", 00:16:34.344 "traddr": "192.168.100.8", 00:16:34.344 "adrfam": "ipv4", 00:16:34.344 "trsvcid": "4420", 00:16:34.344 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:34.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:34.344 "prchk_reftag": false, 00:16:34.344 "prchk_guard": false, 00:16:34.344 "hdgst": false, 00:16:34.344 "ddgst": false, 00:16:34.344 "dhchap_key": "key1", 00:16:34.344 "dhchap_ctrlr_key": "ckey2", 00:16:34.344 "allow_unrecognized_csi": false, 00:16:34.344 "method": "bdev_nvme_attach_controller", 00:16:34.344 "req_id": 1 00:16:34.344 } 00:16:34.344 Got JSON-RPC error response 00:16:34.344 response: 00:16:34.344 { 00:16:34.344 "code": -5, 00:16:34.344 "message": "Input/output error" 00:16:34.344 } 00:16:34.344 12:55:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:34.344 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:34.344 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:34.344 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:34.344 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:34.344 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.344 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.345 12:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.911 request: 00:16:34.911 { 00:16:34.911 "name": "nvme0", 
00:16:34.911 "trtype": "rdma", 00:16:34.911 "traddr": "192.168.100.8", 00:16:34.911 "adrfam": "ipv4", 00:16:34.911 "trsvcid": "4420", 00:16:34.911 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:34.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:34.911 "prchk_reftag": false, 00:16:34.911 "prchk_guard": false, 00:16:34.911 "hdgst": false, 00:16:34.911 "ddgst": false, 00:16:34.911 "dhchap_key": "key1", 00:16:34.911 "dhchap_ctrlr_key": "ckey1", 00:16:34.911 "allow_unrecognized_csi": false, 00:16:34.911 "method": "bdev_nvme_attach_controller", 00:16:34.911 "req_id": 1 00:16:34.911 } 00:16:34.911 Got JSON-RPC error response 00:16:34.911 response: 00:16:34.911 { 00:16:34.911 "code": -5, 00:16:34.911 "message": "Input/output error" 00:16:34.911 } 00:16:34.911 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:34.911 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:34.911 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:34.911 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 4143878 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 4143878 ']' 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 4143878 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4143878 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4143878' 00:16:34.912 killing process with pid 4143878 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 4143878 00:16:34.912 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 4143878 00:16:35.171 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:35.171 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:35.171 12:55:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:35.171 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.171 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=4168378 00:16:35.171 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 4168378 00:16:35.171 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4168378 ']' 00:16:35.171 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.171 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.171 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.171 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.171 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.171 12:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 4168378 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4168378 ']' 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.106 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.364 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.364 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:36.364 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:36.364 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.364 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.364 null0 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nu8 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.94e ]] 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.94e 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.63F 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.8V9 ]] 00:16:36.623 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8V9 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.utJ 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.rWD ]] 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rWD 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kuc 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.624 12:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.559 nvme0n1 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.559 { 00:16:37.559 "cntlid": 1, 00:16:37.559 "qid": 0, 00:16:37.559 "state": "enabled", 00:16:37.559 "thread": "nvmf_tgt_poll_group_000", 00:16:37.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:37.559 "listen_address": { 00:16:37.559 "trtype": "RDMA", 00:16:37.559 "adrfam": "IPv4", 00:16:37.559 "traddr": "192.168.100.8", 00:16:37.559 "trsvcid": "4420" 00:16:37.559 }, 00:16:37.559 "peer_address": { 00:16:37.559 "trtype": "RDMA", 00:16:37.559 "adrfam": "IPv4", 00:16:37.559 "traddr": "192.168.100.8", 00:16:37.559 "trsvcid": "58455" 00:16:37.559 }, 00:16:37.559 "auth": { 00:16:37.559 "state": "completed", 00:16:37.559 "digest": "sha512", 00:16:37.559 "dhgroup": "ffdhe8192" 00:16:37.559 } 00:16:37.559 } 00:16:37.559 ]' 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.559 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.817 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.817 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.817 12:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.817 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:37.817 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:38.384 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.642 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:38.642 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.642 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.642 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.642 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:38.642 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.642 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.642 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.642 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:38.642 12:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:38.900 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:38.900 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:38.900 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:38.900 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:38.900 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.900 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:38.900 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.900 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.900 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.900 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.158 request: 00:16:39.158 { 00:16:39.158 "name": "nvme0", 00:16:39.158 "trtype": "rdma", 00:16:39.158 "traddr": "192.168.100.8", 00:16:39.158 "adrfam": "ipv4", 00:16:39.158 "trsvcid": "4420", 00:16:39.158 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:39.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:39.158 "prchk_reftag": false, 00:16:39.158 "prchk_guard": false, 00:16:39.158 "hdgst": false, 00:16:39.158 "ddgst": false, 00:16:39.158 "dhchap_key": "key3", 00:16:39.158 "allow_unrecognized_csi": false, 00:16:39.158 "method": "bdev_nvme_attach_controller", 00:16:39.158 "req_id": 1 00:16:39.158 } 00:16:39.158 Got JSON-RPC error response 00:16:39.158 response: 00:16:39.158 { 00:16:39.158 "code": -5, 00:16:39.158 "message": "Input/output error" 00:16:39.158 } 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.158 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:16:39.416 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.416 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.416 request: 00:16:39.416 { 00:16:39.416 "name": "nvme0", 00:16:39.416 "trtype": "rdma", 00:16:39.416 "traddr": "192.168.100.8", 00:16:39.416 "adrfam": "ipv4", 00:16:39.416 "trsvcid": "4420", 00:16:39.416 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:39.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:39.416 "prchk_reftag": false, 00:16:39.416 "prchk_guard": false, 00:16:39.416 "hdgst": false, 00:16:39.416 "ddgst": false, 00:16:39.416 "dhchap_key": "key3", 00:16:39.416 "allow_unrecognized_csi": false, 00:16:39.416 "method": "bdev_nvme_attach_controller", 00:16:39.416 "req_id": 1 00:16:39.416 } 00:16:39.416 Got JSON-RPC error response 00:16:39.416 response: 00:16:39.416 { 00:16:39.416 "code": -5, 00:16:39.416 "message": "Input/output error" 00:16:39.416 } 00:16:39.416 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:39.416 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.416 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.416 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.416 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:39.416 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:39.416 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:39.416 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:39.416 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:39.417 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:39.675 12:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:40.241 request: 00:16:40.241 { 00:16:40.241 "name": "nvme0", 00:16:40.241 "trtype": "rdma", 00:16:40.241 "traddr": "192.168.100.8", 00:16:40.241 "adrfam": "ipv4", 00:16:40.241 "trsvcid": "4420", 00:16:40.241 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:40.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:40.241 "prchk_reftag": false, 00:16:40.241 "prchk_guard": false, 00:16:40.241 "hdgst": false, 00:16:40.241 "ddgst": false, 00:16:40.241 "dhchap_key": "key0", 00:16:40.241 "dhchap_ctrlr_key": "key1", 00:16:40.241 "allow_unrecognized_csi": false, 00:16:40.241 "method": "bdev_nvme_attach_controller", 00:16:40.241 "req_id": 1 00:16:40.241 } 00:16:40.241 Got JSON-RPC error response 00:16:40.241 response: 00:16:40.241 { 00:16:40.241 "code": -5, 00:16:40.241 "message": "Input/output error" 00:16:40.241 } 00:16:40.241 12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:40.241 12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:40.241 
12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:40.241 12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:40.241 12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:40.241 12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:40.241 12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:40.241 nvme0n1 00:16:40.241 12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:40.241 12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:40.241 12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.500 12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.500 12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.500 12:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.759 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:16:40.759 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.759 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.759 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.759 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:40.759 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:40.759 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:41.694 nvme0n1 00:16:41.694 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:41.694 12:55:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:41.695 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.695 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.695 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:41.695 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.695 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.695 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.695 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:41.695 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.695 12:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:41.952 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.952 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:41.953 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: --dhchap-ctrl-secret DHHC-1:03:NjcwNGY5ZTI5MWUzMjc5NjVjMDEzMmFhOTY5MDVmZDhmYjg4NWM3N2I5YTk0ZmY2NDQwZWJkYjJjZjVjMTgxOWk03pU=: 00:16:42.519 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:42.519 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:42.519 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:42.519 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:42.519 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:42.519 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:42.519 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:42.519 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.519 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.777 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:42.777 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:42.778 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:42.778 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:42.778 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.778 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:42.778 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.778 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:42.778 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:42.778 12:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:43.035 request: 00:16:43.035 { 00:16:43.035 "name": "nvme0", 00:16:43.035 "trtype": "rdma", 00:16:43.035 "traddr": "192.168.100.8", 00:16:43.035 "adrfam": "ipv4", 00:16:43.035 "trsvcid": "4420", 00:16:43.035 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:43.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:43.035 "prchk_reftag": false, 00:16:43.035 "prchk_guard": false, 00:16:43.035 "hdgst": false, 00:16:43.035 "ddgst": false, 00:16:43.035 "dhchap_key": "key1", 00:16:43.035 "allow_unrecognized_csi": false, 00:16:43.035 "method": "bdev_nvme_attach_controller", 00:16:43.035 "req_id": 1 00:16:43.035 } 00:16:43.035 Got JSON-RPC error response 00:16:43.035 response: 00:16:43.035 { 00:16:43.035 "code": -5, 00:16:43.035 "message": "Input/output error" 00:16:43.035 } 00:16:43.035 12:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:43.035 12:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:43.035 12:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:43.035 12:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:43.035 12:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:43.035 12:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:43.035 12:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:43.968 nvme0n1 00:16:43.968 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:43.968 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:43.968 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.968 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.968 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.968 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.226 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:44.226 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.226 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.226 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.226 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:44.226 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:44.226 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:44.485 nvme0n1 00:16:44.485 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:44.485 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:44.485 12:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.743 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.743 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.743 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: '' 2s 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: ]] 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjIzNTJmMzEyNTRhNmQ1M2YyN2YxZmVhMjllMGFlOTZdIHsm: 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:45.001 12:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:46.901 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:46.901 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:46.901 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:46.901 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:46.901 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:46.901 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:46.901 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:46.901 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:46.901 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.901 12:55:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: 2s 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: ]] 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWRmMjg0NDFiYTEzMDA3ZjI0MDY2NTVlODU0MDM1ZjVjNmNjNjJjMjkxMWE1NTA1gAnzuA==: 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:47.159 12:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:49.057 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:49.057 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:49.057 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:49.057 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:49.057 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:49.058 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:49.058 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:49.058 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.315 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:49.315 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.315 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.315 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.315 12:55:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:49.315 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:49.315 12:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:49.881 nvme0n1 00:16:49.881 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:49.881 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.881 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.881 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.881 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:49.881 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:50.447 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:50.447 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:50.447 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.705 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.705 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:50.705 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.705 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.705 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.705 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:50.705 12:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:50.705 12:55:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:50.705 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:50.705 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:50.964 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:51.531 request: 00:16:51.531 { 00:16:51.531 "name": "nvme0", 00:16:51.531 "dhchap_key": "key1", 00:16:51.531 "dhchap_ctrlr_key": "key3", 00:16:51.531 "method": "bdev_nvme_set_keys", 00:16:51.531 "req_id": 1 00:16:51.531 } 00:16:51.531 Got JSON-RPC error response 00:16:51.531 response: 00:16:51.531 { 00:16:51.531 "code": -13, 00:16:51.531 "message": "Permission denied" 00:16:51.531 } 00:16:51.531 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:51.531 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:51.531 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:51.531 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:51.531 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:16:51.531 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.531 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:51.531 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:51.531 12:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:52.909 12:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:52.909 12:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.909 12:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:52.909 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:52.909 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:52.909 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.909 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.909 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.909 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:52.909 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:52.909 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:53.477 nvme0n1 00:16:53.477 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:53.477 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.477 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.477 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.477 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:53.477 
12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:53.477 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:53.477 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:53.477 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.477 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:53.477 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.477 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:53.477 12:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:54.044 request: 00:16:54.044 { 00:16:54.044 "name": "nvme0", 00:16:54.044 "dhchap_key": "key2", 00:16:54.044 "dhchap_ctrlr_key": "key0", 00:16:54.044 "method": "bdev_nvme_set_keys", 00:16:54.044 "req_id": 1 00:16:54.044 } 00:16:54.044 Got JSON-RPC error response 00:16:54.044 response: 00:16:54.044 { 00:16:54.044 "code": -13, 00:16:54.044 "message": "Permission denied" 00:16:54.044 } 00:16:54.044 12:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:54.044 12:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:54.044 12:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:54.044 12:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:54.044 12:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:54.044 12:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:54.044 12:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.303 12:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:54.303 12:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:55.237 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:55.237 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:55.237 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.495 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:55.495 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:55.495 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:55.495 12:55:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4143915 00:16:55.495 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 4143915 ']' 00:16:55.495 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 4143915 00:16:55.495 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:55.495 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.495 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4143915 00:16:55.495 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:55.495 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:55.496 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4143915' 00:16:55.496 killing process with pid 4143915 00:16:55.496 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 4143915 00:16:55.496 12:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 4143915 00:16:55.754 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:55.754 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:55.754 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:55.754 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:55.755 rmmod nvme_rdma 00:16:55.755 rmmod nvme_fabrics 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 4168378 ']' 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 4168378 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 4168378 ']' 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 4168378 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.755 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4168378 00:16:56.013 12:55:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.013 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.013 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4168378' 00:16:56.013 killing process with pid 4168378 00:16:56.013 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 4168378 00:16:56.013 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 4168378 00:16:56.013 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:56.013 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:56.013 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.nu8 /tmp/spdk.key-sha256.63F /tmp/spdk.key-sha384.utJ /tmp/spdk.key-sha512.kuc /tmp/spdk.key-sha512.94e /tmp/spdk.key-sha384.8V9 /tmp/spdk.key-sha256.rWD '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:16:56.013 00:16:56.013 real 2m44.042s 00:16:56.013 user 6m13.230s 00:16:56.013 sys 0m25.306s 00:16:56.013 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.013 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.013 ************************************ 00:16:56.013 END TEST nvmf_auth_target 00:16:56.013 ************************************ 00:16:56.013 12:55:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:16:56.014 12:55:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:16:56.014 12:55:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:16:56.014 12:55:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:16:56.014 12:55:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:16:56.014 12:55:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:16:56.014 12:55:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:56.014 12:55:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.014 12:55:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:56.272 ************************************ 00:16:56.272 START TEST nvmf_srq_overwhelm 00:16:56.272 ************************************ 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:16:56.272 * Looking for test storage... 
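The killprocess teardown traced above (pids 4143915 and 4168378) follows a small common/autotest_common.sh idiom: probe with kill -0, refuse to signal a sudo wrapper, then kill and reap. A minimal illustrative sketch in bash, assuming the pid is a child of the calling shell (the shipped helper also handles sudo-owned processes and per-signal options):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                     # still alive?
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1    # never signal the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                            # reap; tolerate a non-zero exit status
}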
00:16:56.272 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lcov --version 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.272 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:56.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.273 --rc genhtml_branch_coverage=1 00:16:56.273 --rc genhtml_function_coverage=1 00:16:56.273 --rc genhtml_legend=1 00:16:56.273 --rc geninfo_all_blocks=1 00:16:56.273 --rc geninfo_unexecuted_blocks=1 00:16:56.273 00:16:56.273 ' 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:56.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.273 --rc genhtml_branch_coverage=1 00:16:56.273 --rc genhtml_function_coverage=1 00:16:56.273 --rc genhtml_legend=1 00:16:56.273 --rc geninfo_all_blocks=1 00:16:56.273 --rc geninfo_unexecuted_blocks=1 00:16:56.273 00:16:56.273 ' 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:56.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.273 --rc genhtml_branch_coverage=1 00:16:56.273 --rc genhtml_function_coverage=1 00:16:56.273 --rc genhtml_legend=1 00:16:56.273 --rc geninfo_all_blocks=1 00:16:56.273 --rc geninfo_unexecuted_blocks=1 00:16:56.273 00:16:56.273 ' 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:56.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.273 --rc genhtml_branch_coverage=1 00:16:56.273 --rc genhtml_function_coverage=1 00:16:56.273 --rc genhtml_legend=1 00:16:56.273 --rc geninfo_all_blocks=1 00:16:56.273 --rc geninfo_unexecuted_blocks=1 00:16:56.273 00:16:56.273 ' 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:56.273 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.531 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.531 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.531 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
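The lt/cmp_versions records a few entries above gate lcov usage on its version (here 1.15 < 2). A condensed sketch of that comparison, assuming purely numeric version components (the real scripts/common.sh also normalizes non-numeric fields through its decimal helper):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local v d1 d2
    IFS='.-:' read -ra ver1 <<< "$1"   # split on ".", "-" and ":"
    IFS='.-:' read -ra ver2 <<< "$3"
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}          # missing fields compare as 0
        (( d1 > d2 )) && { [[ $2 == *'>'* ]]; return; }
        (( d1 < d2 )) && { [[ $2 == *'<'* ]]; return; }
    done
    [[ $2 == *'='* ]]                              # equal: true only for ==, <=, >=
}

lt 1.15 2 && echo 'lcov predates 2.x'              # matches the trace: 1.15 < 2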
00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.532 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:16:56.532 12:55:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:06.511 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:06.511 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:06.512 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:06.512 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:06.512 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
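The discovery loop above resolves each matched Mellanox PCI function to its kernel net devices via sysfs. Doing the same lookup by hand for the first port of this run (PCI address and IDs taken from the log; an illustrative sketch, not the nvmf/common.sh code path):

pci=0000:d9:00.0                                    # first ConnectX port found above
vendor=$(< "/sys/bus/pci/devices/$pci/vendor")      # expect 0x15b3 (Mellanox)
device=$(< "/sys/bus/pci/devices/$pci/device")      # expect 0x1015
echo "Found $pci ($vendor - $device)"
for net in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $net ]] || continue                       # glob may expand to nothing
    echo "Found net devices under $pci: ${net##*/}" # e.g. mlx_0_0
done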
00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:06.512 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:06.512 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:06.512 altname enp217s0f0np0 00:17:06.512 altname ens818f0np0 00:17:06.512 inet 192.168.100.8/24 scope global mlx_0_0 00:17:06.512 valid_lft forever preferred_lft forever 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:06.512 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:06.512 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:06.512 altname enp217s0f1np1 00:17:06.512 altname ens818f1np1 00:17:06.512 inet 192.168.100.9/24 scope global mlx_0_1 00:17:06.512 valid_lft forever preferred_lft forever 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:06.512 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:06.513 192.168.100.9' 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:06.513 192.168.100.9' 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:06.513 192.168.100.9' 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=4176150 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 4176150 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 4176150 ']' 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
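The ip/awk/cut pipelines above are nvmf/common.sh's get_ip_address: field 4 of 'ip -o -4 addr show <dev>' is ADDR/PREFIX, so stripping the prefix yields the interface address, and the head/tail pair then splits the collected list into first and second target IPs. A sketch using the addresses assigned in this run:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'                         # values from the trace
NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)  # 192.168.100.9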
00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:06.513 12:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:17:06.513 [2024-11-27 12:55:31.407671] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:17:06.513 [2024-11-27 12:55:31.407730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:06.513 [2024-11-27 12:55:31.498765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:06.513 [2024-11-27 12:55:31.541289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:06.513 [2024-11-27 12:55:31.541328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:06.513 [2024-11-27 12:55:31.541338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:06.513 [2024-11-27 12:55:31.541346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:06.513 [2024-11-27 12:55:31.541353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:06.513 [2024-11-27 12:55:31.543053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:06.513 [2024-11-27 12:55:31.543146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:06.513 [2024-11-27 12:55:31.543242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:06.513 [2024-11-27 12:55:31.543245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0
00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:17:06.513 [2024-11-27 12:55:32.325832] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22c0df0/0x22c52e0) succeed.
00:17:06.513 [2024-11-27 12:55:32.335186] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22c2480/0x2306980) succeed.
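Between forking nvmf_tgt and issuing the first rpc_cmd sits waitforlisten, which printed the 'Waiting for process to start up...' banner above. A minimal sketch of that gate, probing the RPC socket with rpc_get_methods (the shipped helper also supports TCP RPC addresses and a larger retry budget; the rpc.py path is the one used throughout this trace):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2> /dev/null || return 1        # target died during startup
        if "$rpc_py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                   # RPC server is answering
        fi
        sleep 0.5
    done
    return 1
}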
00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:06.513 Malloc0 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.513 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:06.514 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.514 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:06.514 [2024-11-27 12:55:32.436060] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:06.514 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.514 12:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- 
# lsblk -l -o NAME 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.082 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:07.341 Malloc1 00:17:07.341 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.341 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.341 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.341 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:07.341 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.341 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:07.341 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.341 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:07.341 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.341 12:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:08.279 Malloc2 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.279 12:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:17:09.280 12:55:35 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:09.280 Malloc3 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.280 12:55:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:17:10.283 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:17:10.283 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:17:10.283 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:17:10.284 
12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:10.284 Malloc4 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.284 12:55:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:11.663 Malloc5 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.663 12:55:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:17:12.601 12:55:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:17:12.601 12:55:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:17:12.601 12:55:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:12.601 12:55:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:17:12.601 12:55:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:17:12.601 12:55:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:12.601 12:55:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:17:12.601 12:55:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:17:12.601 
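Unrolled from the xtrace, iterations 0 through 5 of the loop above all follow the same pattern. A consolidated sketch (serial numbers in the trace are SPDK-prefixed and zero-padded to 14 digits; waitforblk is the autotest_common.sh helper that polls lsblk until the new namespace appears):

# Reconstruction of the per-subsystem setup loop from srq_overwhelm.sh, as traced
# above; the hostnqn/hostid values are copied verbatim from the log.
for i in $(seq 0 5); do
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a \
            -s "SPDK$(printf '%014u' "$i")"
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MiB bdev, 512 B blocks
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
            --hostid=8013ee90-59d8-e711-906e-00163566263e \
            -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
    waitforblk "nvme${i}n1"
done

The fio-wrapper invocation that follows (-p nvmf -i 1048576 -d 128 -t read -r 10 -n 13) generates the job file echoed below: 1 MiB reads at iodepth=128 for 10 seconds, with numjobs=13 per device. Across the six namespaces that is 6 x 13 = 78 threads, matching the "Starting 78 threads" line in the fio output.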
[global] 00:17:12.601 thread=1 00:17:12.601 invalidate=1 00:17:12.601 rw=read 00:17:12.601 time_based=1 00:17:12.601 runtime=10 00:17:12.601 ioengine=libaio 00:17:12.601 direct=1 00:17:12.601 bs=1048576 00:17:12.601 iodepth=128 00:17:12.601 norandommap=1 00:17:12.601 numjobs=13 00:17:12.601 00:17:12.601 [job0] 00:17:12.601 filename=/dev/nvme0n1 00:17:12.601 [job1] 00:17:12.601 filename=/dev/nvme1n1 00:17:12.601 [job2] 00:17:12.602 filename=/dev/nvme2n1 00:17:12.602 [job3] 00:17:12.602 filename=/dev/nvme3n1 00:17:12.602 [job4] 00:17:12.602 filename=/dev/nvme4n1 00:17:12.602 [job5] 00:17:12.602 filename=/dev/nvme5n1 00:17:12.602 Could not set queue depth (nvme0n1) 00:17:12.602 Could not set queue depth (nvme1n1) 00:17:12.602 Could not set queue depth (nvme2n1) 00:17:12.602 Could not set queue depth (nvme3n1) 00:17:12.602 Could not set queue depth (nvme4n1) 00:17:12.602 Could not set queue depth (nvme5n1) 00:17:12.860 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:12.860 ... 00:17:12.860 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:12.860 ... 00:17:12.860 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:12.860 ... 00:17:12.860 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:12.860 ... 00:17:12.860 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:12.860 ... 00:17:12.860 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:12.860 ... 
00:17:12.860 fio-3.35 00:17:12.860 Starting 78 threads 00:17:27.754 00:17:27.755 job0: (groupid=0, jobs=1): err= 0: pid=4177761: Wed Nov 27 12:55:51 2024 00:17:27.755 read: IOPS=32, BW=33.0MiB/s (34.6MB/s)(334MiB/10133msec) 00:17:27.755 slat (usec): min=60, max=2118.3k, avg=30012.20, stdev=197454.28 00:17:27.755 clat (msec): min=106, max=7414, avg=3672.35, stdev=2573.10 00:17:27.755 lat (msec): min=1083, max=7417, avg=3702.37, stdev=2567.80 00:17:27.755 clat percentiles (msec): 00:17:27.755 | 1.00th=[ 1116], 5.00th=[ 1217], 10.00th=[ 1318], 20.00th=[ 1502], 00:17:27.755 | 30.00th=[ 1737], 40.00th=[ 1871], 50.00th=[ 1938], 60.00th=[ 2056], 00:17:27.755 | 70.00th=[ 6678], 80.00th=[ 6946], 90.00th=[ 7215], 95.00th=[ 7349], 00:17:27.755 | 99.00th=[ 7416], 99.50th=[ 7416], 99.90th=[ 7416], 99.95th=[ 7416], 00:17:27.755 | 99.99th=[ 7416] 00:17:27.755 bw ( KiB/s): min= 4096, max=145408, per=1.58%, avg=52736.00, stdev=49658.34, samples=8 00:17:27.755 iops : min= 4, max= 142, avg=51.50, stdev=48.49, samples=8 00:17:27.755 lat (msec) : 250=0.30%, 2000=55.09%, >=2000=44.61% 00:17:27.755 cpu : usr=0.00%, sys=1.64%, ctx=789, majf=0, minf=32769 00:17:27.755 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.1% 00:17:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.755 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:27.755 issued rwts: total=334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.755 job0: (groupid=0, jobs=1): err= 0: pid=4177762: Wed Nov 27 12:55:51 2024 00:17:27.755 read: IOPS=51, BW=52.0MiB/s (54.5MB/s)(623MiB/11992msec) 00:17:27.755 slat (usec): min=31, max=2165.5k, avg=16070.20, stdev=144885.49 00:17:27.755 clat (msec): min=519, max=8911, avg=2308.87, stdev=2976.76 00:17:27.755 lat (msec): min=523, max=8915, avg=2324.94, stdev=2986.28 00:17:27.755 clat percentiles (msec): 00:17:27.755 | 1.00th=[ 523], 5.00th=[ 527], 10.00th=[ 531], 20.00th=[ 558], 00:17:27.755 | 30.00th=[ 592], 40.00th=[ 634], 50.00th=[ 651], 60.00th=[ 877], 00:17:27.755 | 70.00th=[ 1150], 80.00th=[ 4799], 90.00th=[ 8658], 95.00th=[ 8792], 00:17:27.755 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:17:27.755 | 99.99th=[ 8926] 00:17:27.755 bw ( KiB/s): min=12263, max=243225, per=3.78%, avg=126088.38, stdev=101027.73, samples=8 00:17:27.755 iops : min= 11, max= 237, avg=122.75, stdev=98.80, samples=8 00:17:27.755 lat (msec) : 750=52.65%, 1000=11.24%, 2000=10.11%, >=2000=26.00% 00:17:27.755 cpu : usr=0.01%, sys=1.12%, ctx=733, majf=0, minf=32769 00:17:27.755 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9% 00:17:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.755 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:27.755 issued rwts: total=623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.755 job0: (groupid=0, jobs=1): err= 0: pid=4177763: Wed Nov 27 12:55:51 2024 00:17:27.755 read: IOPS=141, BW=141MiB/s (148MB/s)(1427MiB/10086msec) 00:17:27.755 slat (usec): min=41, max=2059.7k, avg=7019.44, stdev=63502.15 00:17:27.755 clat (msec): min=58, max=4010, avg=621.77, stdev=424.72 00:17:27.755 lat (msec): min=88, max=4012, avg=628.79, stdev=434.17 00:17:27.755 clat percentiles (msec): 00:17:27.755 | 1.00th=[ 131], 5.00th=[ 363], 10.00th=[ 388], 20.00th=[ 401], 00:17:27.755 | 30.00th=[ 
481], 40.00th=[ 592], 50.00th=[ 600], 60.00th=[ 625], 00:17:27.755 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 751], 95.00th=[ 793], 00:17:27.755 | 99.00th=[ 3977], 99.50th=[ 4010], 99.90th=[ 4010], 99.95th=[ 4010], 00:17:27.755 | 99.99th=[ 4010] 00:17:27.755 bw ( KiB/s): min=10240, max=344064, per=5.96%, avg=198800.08, stdev=78814.47, samples=12 00:17:27.755 iops : min= 10, max= 336, avg=194.08, stdev=77.00, samples=12 00:17:27.755 lat (msec) : 100=0.56%, 250=1.75%, 500=29.71%, 750=57.39%, 1000=8.90% 00:17:27.755 lat (msec) : >=2000=1.68% 00:17:27.755 cpu : usr=0.18%, sys=2.51%, ctx=1261, majf=0, minf=32769 00:17:27.755 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:17:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.755 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.755 issued rwts: total=1427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.755 job0: (groupid=0, jobs=1): err= 0: pid=4177764: Wed Nov 27 12:55:51 2024 00:17:27.755 read: IOPS=17, BW=17.3MiB/s (18.1MB/s)(175MiB/10115msec) 00:17:27.755 slat (usec): min=100, max=2182.2k, avg=57183.64, stdev=297161.76 00:17:27.755 clat (msec): min=106, max=6446, avg=5368.68, stdev=1080.86 00:17:27.755 lat (msec): min=1122, max=6454, avg=5425.87, stdev=953.29 00:17:27.755 clat percentiles (msec): 00:17:27.755 | 1.00th=[ 1099], 5.00th=[ 3205], 10.00th=[ 4329], 20.00th=[ 4463], 00:17:27.755 | 30.00th=[ 5470], 40.00th=[ 5537], 50.00th=[ 5738], 60.00th=[ 5873], 00:17:27.755 | 70.00th=[ 6007], 80.00th=[ 6141], 90.00th=[ 6275], 95.00th=[ 6409], 00:17:27.755 | 99.00th=[ 6409], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:17:27.755 | 99.99th=[ 6477] 00:17:27.755 bw ( KiB/s): min= 2048, max=49152, per=0.59%, avg=19660.80, stdev=24214.95, samples=5 00:17:27.755 iops : min= 2, max= 48, avg=19.20, stdev=23.65, samples=5 00:17:27.755 lat (msec) : 250=0.57%, 2000=1.71%, >=2000=97.71% 00:17:27.755 cpu : usr=0.02%, sys=1.39%, ctx=297, majf=0, minf=32769 00:17:27.755 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.6%, 16=9.1%, 32=18.3%, >=64=64.0% 00:17:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.755 complete : 0=0.0%, 4=98.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.0% 00:17:27.755 issued rwts: total=175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.755 job0: (groupid=0, jobs=1): err= 0: pid=4177765: Wed Nov 27 12:55:51 2024 00:17:27.755 read: IOPS=8, BW=8193KiB/s (8389kB/s)(81.0MiB/10124msec) 00:17:27.755 slat (usec): min=446, max=2141.0k, avg=123966.78, stdev=456755.51 00:17:27.755 clat (msec): min=81, max=10121, avg=8283.31, stdev=2625.90 00:17:27.755 lat (msec): min=135, max=10123, avg=8407.27, stdev=2466.01 00:17:27.755 clat percentiles (msec): 00:17:27.755 | 1.00th=[ 82], 5.00th=[ 2265], 10.00th=[ 4463], 20.00th=[ 8792], 00:17:27.755 | 30.00th=[ 8926], 40.00th=[ 9194], 50.00th=[ 9194], 60.00th=[ 9329], 00:17:27.755 | 70.00th=[ 9597], 80.00th=[ 9731], 90.00th=[10000], 95.00th=[10134], 00:17:27.755 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.755 | 99.99th=[10134] 00:17:27.755 lat (msec) : 100=1.23%, 250=3.70%, >=2000=95.06% 00:17:27.755 cpu : usr=0.00%, sys=0.66%, ctx=273, majf=0, minf=20737 00:17:27.755 IO depths : 1=1.2%, 2=2.5%, 4=4.9%, 8=9.9%, 16=19.8%, 32=39.5%, >=64=22.2% 00:17:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.755 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:27.755 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.755 job0: (groupid=0, jobs=1): err= 0: pid=4177766: Wed Nov 27 12:55:51 2024 00:17:27.755 read: IOPS=5, BW=5768KiB/s (5907kB/s)(57.0MiB/10119msec) 00:17:27.755 slat (usec): min=904, max=2175.1k, avg=175562.90, stdev=545997.56 00:17:27.755 clat (msec): min=110, max=10113, avg=8745.27, stdev=2448.38 00:17:27.755 lat (msec): min=120, max=10118, avg=8920.83, stdev=2159.99 00:17:27.755 clat percentiles (msec): 00:17:27.755 | 1.00th=[ 111], 5.00th=[ 150], 10.00th=[ 6544], 20.00th=[ 8926], 00:17:27.755 | 30.00th=[ 9194], 40.00th=[ 9329], 50.00th=[ 9463], 60.00th=[ 9597], 00:17:27.755 | 70.00th=[ 9866], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:17:27.755 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.755 | 99.99th=[10134] 00:17:27.755 lat (msec) : 250=5.26%, >=2000=94.74% 00:17:27.755 cpu : usr=0.01%, sys=0.48%, ctx=282, majf=0, minf=14593 00:17:27.755 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:17:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.755 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:27.755 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.755 job0: (groupid=0, jobs=1): err= 0: pid=4177767: Wed Nov 27 12:55:51 2024 00:17:27.755 read: IOPS=5, BW=5656KiB/s (5792kB/s)(56.0MiB/10138msec) 00:17:27.755 slat (usec): min=1037, max=3271.2k, avg=179149.17, stdev=638823.18 00:17:27.755 clat (msec): min=104, max=10136, avg=9342.48, stdev=2215.70 00:17:27.755 lat (msec): min=2220, max=10137, avg=9521.63, stdev=1826.64 00:17:27.755 clat percentiles (msec): 00:17:27.755 | 1.00th=[ 106], 5.00th=[ 2232], 10.00th=[ 6544], 20.00th=[ 9866], 00:17:27.755 | 30.00th=[10000], 40.00th=[10134], 50.00th=[10134], 60.00th=[10134], 00:17:27.755 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:17:27.755 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.755 | 99.99th=[10134] 00:17:27.755 lat (msec) : 250=1.79%, >=2000=98.21% 00:17:27.755 cpu : usr=0.00%, sys=0.64%, ctx=108, majf=0, minf=14337 00:17:27.755 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:17:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.755 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:27.755 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.755 job0: (groupid=0, jobs=1): err= 0: pid=4177768: Wed Nov 27 12:55:51 2024 00:17:27.755 read: IOPS=36, BW=36.9MiB/s (38.7MB/s)(373MiB/10107msec) 00:17:27.755 slat (usec): min=83, max=2083.0k, avg=26801.18, stdev=194817.64 00:17:27.755 clat (msec): min=106, max=4893, avg=2212.87, stdev=1863.03 00:17:27.755 lat (msec): min=110, max=4897, avg=2239.67, stdev=1865.76 00:17:27.755 clat percentiles (msec): 00:17:27.755 | 1.00th=[ 115], 5.00th=[ 584], 10.00th=[ 592], 20.00th=[ 600], 00:17:27.755 | 30.00th=[ 617], 40.00th=[ 793], 50.00th=[ 1053], 60.00th=[ 2265], 00:17:27.755 | 70.00th=[ 4530], 80.00th=[ 4597], 90.00th=[ 4799], 95.00th=[ 4799], 00:17:27.755 | 99.00th=[ 4866], 
99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:17:27.755 | 99.99th=[ 4866] 00:17:27.755 bw ( KiB/s): min=18432, max=221184, per=3.02%, avg=100692.80, stdev=85799.64, samples=5 00:17:27.755 iops : min= 18, max= 216, avg=98.00, stdev=84.01, samples=5 00:17:27.755 lat (msec) : 250=4.02%, 750=33.24%, 1000=10.19%, 2000=11.53%, >=2000=41.02% 00:17:27.755 cpu : usr=0.05%, sys=1.44%, ctx=477, majf=0, minf=32769 00:17:27.755 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.6%, >=64=83.1% 00:17:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.755 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:27.755 issued rwts: total=373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.756 job0: (groupid=0, jobs=1): err= 0: pid=4177769: Wed Nov 27 12:55:51 2024 00:17:27.756 read: IOPS=29, BW=29.2MiB/s (30.6MB/s)(295MiB/10111msec) 00:17:27.756 slat (usec): min=58, max=2142.0k, avg=33946.79, stdev=244890.06 00:17:27.756 clat (msec): min=94, max=9174, avg=4218.15, stdev=3985.23 00:17:27.756 lat (msec): min=124, max=9188, avg=4252.10, stdev=3986.64 00:17:27.756 clat percentiles (msec): 00:17:27.756 | 1.00th=[ 460], 5.00th=[ 510], 10.00th=[ 542], 20.00th=[ 651], 00:17:27.756 | 30.00th=[ 684], 40.00th=[ 709], 50.00th=[ 751], 60.00th=[ 6544], 00:17:27.756 | 70.00th=[ 8792], 80.00th=[ 8926], 90.00th=[ 9060], 95.00th=[ 9194], 00:17:27.756 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:17:27.756 | 99.99th=[ 9194] 00:17:27.756 bw ( KiB/s): min= 2048, max=196215, per=1.71%, avg=56937.17, stdev=81964.54, samples=6 00:17:27.756 iops : min= 2, max= 191, avg=55.50, stdev=79.83, samples=6 00:17:27.756 lat (msec) : 100=0.34%, 250=0.34%, 500=3.73%, 750=45.42%, 1000=3.05% 00:17:27.756 lat (msec) : >=2000=47.12% 00:17:27.756 cpu : usr=0.00%, sys=1.43%, ctx=310, majf=0, minf=32769 00:17:27.756 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.6% 00:17:27.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.756 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:27.756 issued rwts: total=295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.756 job0: (groupid=0, jobs=1): err= 0: pid=4177770: Wed Nov 27 12:55:51 2024 00:17:27.756 read: IOPS=7, BW=7426KiB/s (7605kB/s)(87.0MiB/11996msec) 00:17:27.756 slat (usec): min=421, max=2236.6k, avg=115157.77, stdev=443759.72 00:17:27.756 clat (msec): min=1976, max=11974, avg=10749.97, stdev=2012.00 00:17:27.756 lat (msec): min=2098, max=11995, avg=10865.13, stdev=1776.97 00:17:27.756 clat percentiles (msec): 00:17:27.756 | 1.00th=[ 1972], 5.00th=[ 6342], 10.00th=[10671], 20.00th=[10805], 00:17:27.756 | 30.00th=[10939], 40.00th=[11073], 50.00th=[11208], 60.00th=[11342], 00:17:27.756 | 70.00th=[11476], 80.00th=[11610], 90.00th=[11745], 95.00th=[11879], 00:17:27.756 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:27.756 | 99.99th=[12013] 00:17:27.756 lat (msec) : 2000=1.15%, >=2000=98.85% 00:17:27.756 cpu : usr=0.02%, sys=0.45%, ctx=253, majf=0, minf=22273 00:17:27.756 IO depths : 1=1.1%, 2=2.3%, 4=4.6%, 8=9.2%, 16=18.4%, 32=36.8%, >=64=27.6% 00:17:27.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.756 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:27.756 issued rwts: total=87,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:17:27.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.756 job0: (groupid=0, jobs=1): err= 0: pid=4177771: Wed Nov 27 12:55:51 2024 00:17:27.756 read: IOPS=90, BW=91.0MiB/s (95.4MB/s)(922MiB/10136msec) 00:17:27.756 slat (usec): min=58, max=2131.5k, avg=10851.11, stdev=119622.92 00:17:27.756 clat (msec): min=121, max=6927, avg=1345.11, stdev=2102.53 00:17:27.756 lat (msec): min=387, max=6929, avg=1355.97, stdev=2108.88 00:17:27.756 clat percentiles (msec): 00:17:27.756 | 1.00th=[ 388], 5.00th=[ 388], 10.00th=[ 393], 20.00th=[ 393], 00:17:27.756 | 30.00th=[ 397], 40.00th=[ 401], 50.00th=[ 405], 60.00th=[ 523], 00:17:27.756 | 70.00th=[ 667], 80.00th=[ 776], 90.00th=[ 6678], 95.00th=[ 6812], 00:17:27.756 | 99.00th=[ 6879], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:17:27.756 | 99.99th=[ 6946] 00:17:27.756 bw ( KiB/s): min= 2048, max=333824, per=4.88%, avg=162816.00, stdev=148159.90, samples=10 00:17:27.756 iops : min= 2, max= 326, avg=159.00, stdev=144.69, samples=10 00:17:27.756 lat (msec) : 250=0.11%, 500=58.24%, 750=20.39%, 1000=6.07%, >=2000=15.18% 00:17:27.756 cpu : usr=0.08%, sys=2.54%, ctx=807, majf=0, minf=32769 00:17:27.756 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:17:27.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.756 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.756 issued rwts: total=922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.756 job0: (groupid=0, jobs=1): err= 0: pid=4177772: Wed Nov 27 12:55:51 2024 00:17:27.756 read: IOPS=71, BW=71.4MiB/s (74.9MB/s)(719MiB/10067msec) 00:17:27.756 slat (usec): min=42, max=2041.9k, avg=13904.57, stdev=128987.18 00:17:27.756 clat (msec): min=66, max=5773, avg=1510.39, stdev=1397.22 00:17:27.756 lat (msec): min=69, max=5862, avg=1524.29, stdev=1403.84 00:17:27.756 clat percentiles (msec): 00:17:27.756 | 1.00th=[ 89], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 271], 00:17:27.756 | 30.00th=[ 506], 40.00th=[ 510], 50.00th=[ 634], 60.00th=[ 1036], 00:17:27.756 | 70.00th=[ 2500], 80.00th=[ 3037], 90.00th=[ 3910], 95.00th=[ 3977], 00:17:27.756 | 99.00th=[ 4010], 99.50th=[ 4597], 99.90th=[ 5805], 99.95th=[ 5805], 00:17:27.756 | 99.99th=[ 5805] 00:17:27.756 bw ( KiB/s): min= 2048, max=370688, per=4.40%, avg=146624.62, stdev=119653.48, samples=8 00:17:27.756 iops : min= 2, max= 362, avg=143.12, stdev=116.78, samples=8 00:17:27.756 lat (msec) : 100=1.67%, 250=0.97%, 500=24.90%, 750=25.59%, 1000=6.26% 00:17:27.756 lat (msec) : 2000=5.29%, >=2000=35.33% 00:17:27.756 cpu : usr=0.04%, sys=1.19%, ctx=794, majf=0, minf=32769 00:17:27.756 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:17:27.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.756 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:27.756 issued rwts: total=719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.756 job0: (groupid=0, jobs=1): err= 0: pid=4177773: Wed Nov 27 12:55:51 2024 00:17:27.756 read: IOPS=251, BW=251MiB/s (264MB/s)(2553MiB/10156msec) 00:17:27.756 slat (usec): min=40, max=4222.3k, avg=3941.46, stdev=93316.62 00:17:27.756 clat (msec): min=87, max=8785, avg=486.40, stdev=1142.93 00:17:27.756 lat (msec): min=103, max=8821, avg=490.34, stdev=1151.80 00:17:27.756 clat percentiles 
(msec): 00:17:27.756 | 1.00th=[ 103], 5.00th=[ 104], 10.00th=[ 104], 20.00th=[ 104], 00:17:27.756 | 30.00th=[ 104], 40.00th=[ 105], 50.00th=[ 105], 60.00th=[ 105], 00:17:27.756 | 70.00th=[ 105], 80.00th=[ 106], 90.00th=[ 919], 95.00th=[ 3171], 00:17:27.756 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 6611], 99.95th=[ 8658], 00:17:27.756 | 99.99th=[ 8792] 00:17:27.756 bw ( KiB/s): min= 4039, max=1214464, per=18.62%, avg=620792.88, stdev=570423.62, samples=8 00:17:27.756 iops : min= 3, max= 1186, avg=606.12, stdev=557.20, samples=8 00:17:27.756 lat (msec) : 100=0.04%, 250=87.27%, 750=1.84%, 1000=0.94%, 2000=0.04% 00:17:27.756 lat (msec) : >=2000=9.87% 00:17:27.756 cpu : usr=0.05%, sys=2.42%, ctx=2530, majf=0, minf=32769 00:17:27.756 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:17:27.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.756 issued rwts: total=2553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.756 job1: (groupid=0, jobs=1): err= 0: pid=4177774: Wed Nov 27 12:55:51 2024 00:17:27.756 read: IOPS=4, BW=4462KiB/s (4569kB/s)(44.0MiB/10098msec) 00:17:27.756 slat (usec): min=933, max=2087.6k, avg=227333.84, stdev=616945.56 00:17:27.756 clat (msec): min=94, max=10096, avg=5933.05, stdev=3610.36 00:17:27.756 lat (msec): min=103, max=10097, avg=6160.39, stdev=3548.57 00:17:27.756 clat percentiles (msec): 00:17:27.756 | 1.00th=[ 95], 5.00th=[ 176], 10.00th=[ 199], 20.00th=[ 2299], 00:17:27.756 | 30.00th=[ 2333], 40.00th=[ 4463], 50.00th=[ 6611], 60.00th=[ 8792], 00:17:27.756 | 70.00th=[ 8792], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:17:27.756 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.756 | 99.99th=[10134] 00:17:27.756 lat (msec) : 100=2.27%, 250=9.09%, >=2000=88.64% 00:17:27.756 cpu : usr=0.01%, sys=0.37%, ctx=81, majf=0, minf=11265 00:17:27.756 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:17:27.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.756 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:27.756 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.756 job1: (groupid=0, jobs=1): err= 0: pid=4177775: Wed Nov 27 12:55:51 2024 00:17:27.756 read: IOPS=33, BW=33.3MiB/s (34.9MB/s)(336MiB/10103msec) 00:17:27.756 slat (usec): min=489, max=2162.1k, avg=29757.87, stdev=199780.96 00:17:27.756 clat (msec): min=102, max=6547, avg=2648.96, stdev=1762.05 00:17:27.756 lat (msec): min=772, max=8609, avg=2678.72, stdev=1777.83 00:17:27.756 clat percentiles (msec): 00:17:27.756 | 1.00th=[ 768], 5.00th=[ 818], 10.00th=[ 869], 20.00th=[ 978], 00:17:27.756 | 30.00th=[ 1062], 40.00th=[ 1099], 50.00th=[ 2005], 60.00th=[ 4044], 00:17:27.756 | 70.00th=[ 4463], 80.00th=[ 4665], 90.00th=[ 4866], 95.00th=[ 5000], 00:17:27.756 | 99.00th=[ 6477], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544], 00:17:27.756 | 99.99th=[ 6544] 00:17:27.756 bw ( KiB/s): min= 2048, max=157696, per=1.83%, avg=61147.43, stdev=54563.33, samples=7 00:17:27.756 iops : min= 2, max= 154, avg=59.71, stdev=53.28, samples=7 00:17:27.756 lat (msec) : 250=0.30%, 1000=20.83%, 2000=28.27%, >=2000=50.60% 00:17:27.756 cpu : usr=0.01%, sys=1.01%, ctx=1236, majf=0, minf=32769 00:17:27.756 IO depths 
: 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.5%, >=64=81.2% 00:17:27.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.756 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:27.756 issued rwts: total=336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.756 job1: (groupid=0, jobs=1): err= 0: pid=4177776: Wed Nov 27 12:55:51 2024 00:17:27.756 read: IOPS=9, BW=9562KiB/s (9791kB/s)(95.0MiB/10174msec) 00:17:27.756 slat (usec): min=679, max=2105.7k, avg=105822.38, stdev=435853.74 00:17:27.756 clat (msec): min=119, max=10172, avg=8132.37, stdev=2951.03 00:17:27.756 lat (msec): min=2211, max=10173, avg=8238.19, stdev=2838.76 00:17:27.756 clat percentiles (msec): 00:17:27.756 | 1.00th=[ 121], 5.00th=[ 2232], 10.00th=[ 2265], 20.00th=[ 4396], 00:17:27.756 | 30.00th=[ 6544], 40.00th=[ 9866], 50.00th=[10000], 60.00th=[10134], 00:17:27.756 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:17:27.756 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.756 | 99.99th=[10134] 00:17:27.756 lat (msec) : 250=1.05%, >=2000=98.95% 00:17:27.757 cpu : usr=0.00%, sys=1.05%, ctx=103, majf=0, minf=24321 00:17:27.757 IO depths : 1=1.1%, 2=2.1%, 4=4.2%, 8=8.4%, 16=16.8%, 32=33.7%, >=64=33.7% 00:17:27.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.757 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:27.757 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.757 job1: (groupid=0, jobs=1): err= 0: pid=4177777: Wed Nov 27 12:55:51 2024 00:17:27.757 read: IOPS=45, BW=45.9MiB/s (48.1MB/s)(462MiB/10069msec) 00:17:27.757 slat (usec): min=577, max=2097.8k, avg=21644.78, stdev=161062.58 00:17:27.757 clat (msec): min=66, max=8750, avg=2530.53, stdev=2650.11 00:17:27.757 lat (msec): min=87, max=8999, avg=2552.18, stdev=2655.32 00:17:27.757 clat percentiles (msec): 00:17:27.757 | 1.00th=[ 144], 5.00th=[ 659], 10.00th=[ 693], 20.00th=[ 709], 00:17:27.757 | 30.00th=[ 802], 40.00th=[ 827], 50.00th=[ 961], 60.00th=[ 1183], 00:17:27.757 | 70.00th=[ 1502], 80.00th=[ 6678], 90.00th=[ 6946], 95.00th=[ 7080], 00:17:27.757 | 99.00th=[ 7215], 99.50th=[ 7215], 99.90th=[ 8792], 99.95th=[ 8792], 00:17:27.757 | 99.99th=[ 8792] 00:17:27.757 bw ( KiB/s): min= 4096, max=179864, per=2.89%, avg=96455.00, stdev=73781.88, samples=7 00:17:27.757 iops : min= 4, max= 175, avg=94.00, stdev=71.85, samples=7 00:17:27.757 lat (msec) : 100=0.43%, 250=0.65%, 750=24.46%, 1000=25.76%, 2000=20.13% 00:17:27.757 lat (msec) : >=2000=28.57% 00:17:27.757 cpu : usr=0.01%, sys=1.30%, ctx=1093, majf=0, minf=32769 00:17:27.757 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=6.9%, >=64=86.4% 00:17:27.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.757 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:27.757 issued rwts: total=462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.757 job1: (groupid=0, jobs=1): err= 0: pid=4177778: Wed Nov 27 12:55:51 2024 00:17:27.757 read: IOPS=31, BW=31.6MiB/s (33.1MB/s)(319MiB/10097msec) 00:17:27.757 slat (usec): min=459, max=2096.3k, avg=31391.80, stdev=198975.34 00:17:27.757 clat (msec): min=81, max=7828, avg=3731.17, stdev=2785.19 00:17:27.757 lat (msec): 
min=1233, max=7830, avg=3762.56, stdev=2781.62 00:17:27.757 clat percentiles (msec): 00:17:27.757 | 1.00th=[ 1234], 5.00th=[ 1284], 10.00th=[ 1334], 20.00th=[ 1401], 00:17:27.757 | 30.00th=[ 1485], 40.00th=[ 1536], 50.00th=[ 1552], 60.00th=[ 3574], 00:17:27.757 | 70.00th=[ 6879], 80.00th=[ 7215], 90.00th=[ 7483], 95.00th=[ 7684], 00:17:27.757 | 99.00th=[ 7752], 99.50th=[ 7819], 99.90th=[ 7819], 99.95th=[ 7819], 00:17:27.757 | 99.99th=[ 7819] 00:17:27.757 bw ( KiB/s): min= 2048, max=102605, per=1.47%, avg=48963.00, stdev=47181.86, samples=8 00:17:27.757 iops : min= 2, max= 100, avg=47.75, stdev=46.01, samples=8 00:17:27.757 lat (msec) : 100=0.31%, 2000=56.74%, >=2000=42.95% 00:17:27.757 cpu : usr=0.00%, sys=1.26%, ctx=769, majf=0, minf=32769 00:17:27.757 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.0%, 32=10.0%, >=64=80.3% 00:17:27.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.757 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:27.757 issued rwts: total=319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.757 job1: (groupid=0, jobs=1): err= 0: pid=4177779: Wed Nov 27 12:55:51 2024 00:17:27.757 read: IOPS=7, BW=7983KiB/s (8174kB/s)(79.0MiB/10134msec) 00:17:27.757 slat (usec): min=658, max=2100.7k, avg=126961.84, stdev=473743.88 00:17:27.757 clat (msec): min=103, max=10131, avg=8668.02, stdev=2506.10 00:17:27.757 lat (msec): min=2190, max=10133, avg=8794.98, stdev=2313.32 00:17:27.757 clat percentiles (msec): 00:17:27.757 | 1.00th=[ 104], 5.00th=[ 2198], 10.00th=[ 4396], 20.00th=[ 8658], 00:17:27.757 | 30.00th=[ 8658], 40.00th=[10000], 50.00th=[10000], 60.00th=[10000], 00:17:27.757 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:17:27.757 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.757 | 99.99th=[10134] 00:17:27.757 lat (msec) : 250=1.27%, >=2000=98.73% 00:17:27.757 cpu : usr=0.00%, sys=0.85%, ctx=108, majf=0, minf=20225 00:17:27.757 IO depths : 1=1.3%, 2=2.5%, 4=5.1%, 8=10.1%, 16=20.3%, 32=40.5%, >=64=20.3% 00:17:27.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.757 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:27.757 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.757 job1: (groupid=0, jobs=1): err= 0: pid=4177780: Wed Nov 27 12:55:51 2024 00:17:27.757 read: IOPS=86, BW=86.3MiB/s (90.5MB/s)(872MiB/10104msec) 00:17:27.757 slat (usec): min=45, max=2126.6k, avg=11491.26, stdev=101568.65 00:17:27.757 clat (msec): min=78, max=5235, avg=1370.36, stdev=1461.34 00:17:27.757 lat (msec): min=138, max=5239, avg=1381.86, stdev=1464.39 00:17:27.757 clat percentiles (msec): 00:17:27.757 | 1.00th=[ 527], 5.00th=[ 634], 10.00th=[ 642], 20.00th=[ 667], 00:17:27.757 | 30.00th=[ 709], 40.00th=[ 743], 50.00th=[ 776], 60.00th=[ 844], 00:17:27.757 | 70.00th=[ 894], 80.00th=[ 944], 90.00th=[ 4799], 95.00th=[ 5000], 00:17:27.757 | 99.00th=[ 5201], 99.50th=[ 5201], 99.90th=[ 5269], 99.95th=[ 5269], 00:17:27.757 | 99.99th=[ 5269] 00:17:27.757 bw ( KiB/s): min= 6144, max=196608, per=3.81%, avg=126947.33, stdev=73214.31, samples=12 00:17:27.757 iops : min= 6, max= 192, avg=123.83, stdev=71.61, samples=12 00:17:27.757 lat (msec) : 100=0.11%, 250=0.34%, 750=45.18%, 1000=38.99%, 2000=0.46% 00:17:27.757 lat (msec) : >=2000=14.91% 00:17:27.757 cpu : usr=0.04%, 
sys=1.59%, ctx=2106, majf=0, minf=32769 00:17:27.757 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.8% 00:17:27.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.757 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.757 issued rwts: total=872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.757 job1: (groupid=0, jobs=1): err= 0: pid=4177781: Wed Nov 27 12:55:51 2024 00:17:27.757 read: IOPS=18, BW=18.4MiB/s (19.3MB/s)(186MiB/10123msec) 00:17:27.757 slat (usec): min=89, max=2098.0k, avg=53772.23, stdev=294759.31 00:17:27.757 clat (msec): min=119, max=8208, avg=4055.20, stdev=2754.71 00:17:27.757 lat (msec): min=125, max=8212, avg=4108.97, stdev=2763.41 00:17:27.757 clat percentiles (msec): 00:17:27.757 | 1.00th=[ 126], 5.00th=[ 1687], 10.00th=[ 1737], 20.00th=[ 1854], 00:17:27.757 | 30.00th=[ 1955], 40.00th=[ 2072], 50.00th=[ 2165], 60.00th=[ 4396], 00:17:27.757 | 70.00th=[ 6544], 80.00th=[ 8154], 90.00th=[ 8221], 95.00th=[ 8221], 00:17:27.757 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8221], 99.95th=[ 8221], 00:17:27.757 | 99.99th=[ 8221] 00:17:27.757 bw ( KiB/s): min= 4096, max=116736, per=1.81%, avg=60416.00, stdev=79648.51, samples=2 00:17:27.757 iops : min= 4, max= 114, avg=59.00, stdev=77.78, samples=2 00:17:27.757 lat (msec) : 250=1.08%, 2000=33.87%, >=2000=65.05% 00:17:27.757 cpu : usr=0.02%, sys=1.41%, ctx=178, majf=0, minf=32769 00:17:27.757 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.6%, 32=17.2%, >=64=66.1% 00:17:27.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.757 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7% 00:17:27.757 issued rwts: total=186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.757 job1: (groupid=0, jobs=1): err= 0: pid=4177782: Wed Nov 27 12:55:51 2024 00:17:27.757 read: IOPS=7, BW=7982KiB/s (8173kB/s)(79.0MiB/10135msec) 00:17:27.757 slat (usec): min=586, max=2093.2k, avg=126919.46, stdev=470905.82 00:17:27.757 clat (msec): min=107, max=10133, avg=8557.57, stdev=2695.73 00:17:27.757 lat (msec): min=2180, max=10134, avg=8684.49, stdev=2523.33 00:17:27.757 clat percentiles (msec): 00:17:27.757 | 1.00th=[ 108], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 6477], 00:17:27.757 | 30.00th=[ 9731], 40.00th=[ 9866], 50.00th=[ 9866], 60.00th=[10000], 00:17:27.757 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:17:27.757 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.757 | 99.99th=[10134] 00:17:27.757 lat (msec) : 250=1.27%, >=2000=98.73% 00:17:27.757 cpu : usr=0.00%, sys=0.81%, ctx=125, majf=0, minf=20225 00:17:27.757 IO depths : 1=1.3%, 2=2.5%, 4=5.1%, 8=10.1%, 16=20.3%, 32=40.5%, >=64=20.3% 00:17:27.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.757 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:27.757 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.757 job1: (groupid=0, jobs=1): err= 0: pid=4177783: Wed Nov 27 12:55:51 2024 00:17:27.757 read: IOPS=79, BW=79.7MiB/s (83.5MB/s)(805MiB/10104msec) 00:17:27.757 slat (usec): min=44, max=2075.2k, avg=12442.09, stdev=109972.78 00:17:27.757 clat (msec): min=82, max=5014, avg=1509.99, stdev=1417.96 00:17:27.757 lat 
(msec): min=106, max=5018, avg=1522.44, stdev=1421.57 00:17:27.757 clat percentiles (msec): 00:17:27.757 | 1.00th=[ 600], 5.00th=[ 634], 10.00th=[ 676], 20.00th=[ 684], 00:17:27.757 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 760], 60.00th=[ 877], 00:17:27.757 | 70.00th=[ 1401], 80.00th=[ 1770], 90.00th=[ 4799], 95.00th=[ 4933], 00:17:27.757 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:17:27.757 | 99.99th=[ 5000] 00:17:27.757 bw ( KiB/s): min= 2048, max=203158, per=4.16%, avg=138825.30, stdev=69915.86, samples=10 00:17:27.757 iops : min= 2, max= 198, avg=135.40, stdev=68.17, samples=10 00:17:27.757 lat (msec) : 100=0.12%, 250=0.12%, 750=48.82%, 1000=13.54%, 2000=19.25% 00:17:27.757 lat (msec) : >=2000=18.14% 00:17:27.757 cpu : usr=0.04%, sys=2.10%, ctx=864, majf=0, minf=32769 00:17:27.757 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:17:27.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.757 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.757 issued rwts: total=805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.757 job1: (groupid=0, jobs=1): err= 0: pid=4177784: Wed Nov 27 12:55:51 2024 00:17:27.757 read: IOPS=26, BW=26.9MiB/s (28.2MB/s)(271MiB/10068msec) 00:17:27.757 slat (usec): min=60, max=2096.3k, avg=37043.22, stdev=216225.35 00:17:27.757 clat (msec): min=26, max=7835, avg=4150.48, stdev=2801.95 00:17:27.757 lat (msec): min=67, max=7841, avg=4187.52, stdev=2794.25 00:17:27.757 clat percentiles (msec): 00:17:27.757 | 1.00th=[ 70], 5.00th=[ 103], 10.00th=[ 1620], 20.00th=[ 1720], 00:17:27.757 | 30.00th=[ 1737], 40.00th=[ 1804], 50.00th=[ 2056], 60.00th=[ 6544], 00:17:27.757 | 70.00th=[ 6812], 80.00th=[ 7215], 90.00th=[ 7617], 95.00th=[ 7752], 00:17:27.758 | 99.00th=[ 7819], 99.50th=[ 7819], 99.90th=[ 7819], 99.95th=[ 7819], 00:17:27.758 | 99.99th=[ 7819] 00:17:27.758 bw ( KiB/s): min= 6144, max=92160, per=1.28%, avg=42647.50, stdev=36704.72, samples=6 00:17:27.758 iops : min= 6, max= 90, avg=41.50, stdev=35.78, samples=6 00:17:27.758 lat (msec) : 50=0.37%, 100=4.43%, 250=2.21%, 2000=42.44%, >=2000=50.55% 00:17:27.758 cpu : usr=0.00%, sys=1.23%, ctx=801, majf=0, minf=32769 00:17:27.758 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.8%, >=64=76.8% 00:17:27.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.758 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:17:27.758 issued rwts: total=271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.758 job1: (groupid=0, jobs=1): err= 0: pid=4177785: Wed Nov 27 12:55:51 2024 00:17:27.758 read: IOPS=79, BW=79.9MiB/s (83.7MB/s)(803MiB/10054msec) 00:17:27.758 slat (usec): min=42, max=2168.7k, avg=12483.55, stdev=114085.31 00:17:27.758 clat (msec): min=27, max=7534, avg=1047.55, stdev=1087.76 00:17:27.758 lat (msec): min=54, max=7564, avg=1060.04, stdev=1107.42 00:17:27.758 clat percentiles (msec): 00:17:27.758 | 1.00th=[ 66], 5.00th=[ 288], 10.00th=[ 464], 20.00th=[ 481], 00:17:27.758 | 30.00th=[ 542], 40.00th=[ 567], 50.00th=[ 600], 60.00th=[ 642], 00:17:27.758 | 70.00th=[ 693], 80.00th=[ 911], 90.00th=[ 3272], 95.00th=[ 3339], 00:17:27.758 | 99.00th=[ 4077], 99.50th=[ 4077], 99.90th=[ 7550], 99.95th=[ 7550], 00:17:27.758 | 99.99th=[ 7550] 00:17:27.758 bw ( KiB/s): min=10240, max=266240, per=4.48%, avg=149196.38, 
stdev=94123.90, samples=8 00:17:27.758 iops : min= 10, max= 260, avg=145.62, stdev=91.87, samples=8 00:17:27.758 lat (msec) : 50=0.12%, 100=1.74%, 250=2.62%, 500=18.68%, 750=54.30% 00:17:27.758 lat (msec) : 1000=3.99%, 2000=1.25%, >=2000=17.31% 00:17:27.758 cpu : usr=0.03%, sys=1.17%, ctx=1559, majf=0, minf=32769 00:17:27.758 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:17:27.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.758 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.758 issued rwts: total=803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.758 job1: (groupid=0, jobs=1): err= 0: pid=4177786: Wed Nov 27 12:55:51 2024 00:17:27.758 read: IOPS=29, BW=29.6MiB/s (31.1MB/s)(300MiB/10131msec) 00:17:27.758 slat (usec): min=919, max=2125.9k, avg=33451.11, stdev=188592.77 00:17:27.758 clat (msec): min=93, max=7501, avg=3881.83, stdev=2393.83 00:17:27.758 lat (msec): min=152, max=7530, avg=3915.28, stdev=2390.64 00:17:27.758 clat percentiles (msec): 00:17:27.758 | 1.00th=[ 163], 5.00th=[ 1737], 10.00th=[ 1787], 20.00th=[ 1838], 00:17:27.758 | 30.00th=[ 1888], 40.00th=[ 1938], 50.00th=[ 2056], 60.00th=[ 6007], 00:17:27.758 | 70.00th=[ 6074], 80.00th=[ 6745], 90.00th=[ 7148], 95.00th=[ 7349], 00:17:27.758 | 99.00th=[ 7483], 99.50th=[ 7483], 99.90th=[ 7483], 99.95th=[ 7483], 00:17:27.758 | 99.99th=[ 7483] 00:17:27.758 bw ( KiB/s): min= 4087, max=81920, per=1.17%, avg=39124.00, stdev=34188.37, samples=9 00:17:27.758 iops : min= 3, max= 80, avg=38.00, stdev=33.43, samples=9 00:17:27.758 lat (msec) : 100=0.33%, 250=1.33%, 2000=43.67%, >=2000=54.67% 00:17:27.758 cpu : usr=0.04%, sys=1.30%, ctx=882, majf=0, minf=32769 00:17:27.758 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.3%, 32=10.7%, >=64=79.0% 00:17:27.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.758 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:27.758 issued rwts: total=300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.758 job2: (groupid=0, jobs=1): err= 0: pid=4177787: Wed Nov 27 12:55:51 2024 00:17:27.758 read: IOPS=17, BW=17.6MiB/s (18.5MB/s)(178MiB/10095msec) 00:17:27.758 slat (usec): min=1306, max=2155.6k, avg=56178.35, stdev=279004.65 00:17:27.758 clat (msec): min=93, max=7040, avg=3413.24, stdev=1490.08 00:17:27.758 lat (msec): min=102, max=7065, avg=3469.42, stdev=1490.75 00:17:27.758 clat percentiles (msec): 00:17:27.758 | 1.00th=[ 103], 5.00th=[ 1770], 10.00th=[ 1787], 20.00th=[ 2567], 00:17:27.758 | 30.00th=[ 2802], 40.00th=[ 3037], 50.00th=[ 3239], 60.00th=[ 3473], 00:17:27.758 | 70.00th=[ 3708], 80.00th=[ 3876], 90.00th=[ 6946], 95.00th=[ 7013], 00:17:27.758 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:17:27.758 | 99.99th=[ 7013] 00:17:27.758 bw ( KiB/s): min=12288, max=65536, per=1.04%, avg=34816.00, stdev=27553.02, samples=3 00:17:27.758 iops : min= 12, max= 64, avg=34.00, stdev=26.91, samples=3 00:17:27.758 lat (msec) : 100=0.56%, 250=2.81%, 2000=10.67%, >=2000=85.96% 00:17:27.758 cpu : usr=0.01%, sys=1.16%, ctx=598, majf=0, minf=32769 00:17:27.758 IO depths : 1=0.6%, 2=1.1%, 4=2.2%, 8=4.5%, 16=9.0%, 32=18.0%, >=64=64.6% 00:17:27.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.758 complete : 0=0.0%, 4=98.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=1.9% 00:17:27.758 issued rwts: total=178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.758 job2: (groupid=0, jobs=1): err= 0: pid=4177788: Wed Nov 27 12:55:51 2024 00:17:27.758 read: IOPS=19, BW=19.2MiB/s (20.1MB/s)(194MiB/10109msec) 00:17:27.758 slat (usec): min=1103, max=2119.2k, avg=51569.85, stdev=266047.52 00:17:27.758 clat (msec): min=103, max=7072, avg=3587.15, stdev=1689.99 00:17:27.758 lat (msec): min=118, max=7084, avg=3638.72, stdev=1684.90 00:17:27.758 clat percentiles (msec): 00:17:27.758 | 1.00th=[ 118], 5.00th=[ 1687], 10.00th=[ 1754], 20.00th=[ 2366], 00:17:27.758 | 30.00th=[ 2702], 40.00th=[ 2937], 50.00th=[ 3239], 60.00th=[ 3473], 00:17:27.758 | 70.00th=[ 3708], 80.00th=[ 3943], 90.00th=[ 7013], 95.00th=[ 7013], 00:17:27.758 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:17:27.758 | 99.99th=[ 7080] 00:17:27.758 bw ( KiB/s): min= 4096, max=75776, per=1.03%, avg=34304.00, stdev=31611.59, samples=4 00:17:27.758 iops : min= 4, max= 74, avg=33.50, stdev=30.87, samples=4 00:17:27.758 lat (msec) : 250=1.03%, 2000=14.95%, >=2000=84.02% 00:17:27.758 cpu : usr=0.02%, sys=1.28%, ctx=600, majf=0, minf=32769 00:17:27.758 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.1%, 16=8.2%, 32=16.5%, >=64=67.5% 00:17:27.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.758 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.5% 00:17:27.758 issued rwts: total=194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.758 job2: (groupid=0, jobs=1): err= 0: pid=4177789: Wed Nov 27 12:55:51 2024 00:17:27.758 read: IOPS=90, BW=90.0MiB/s (94.4MB/s)(905MiB/10055msec) 00:17:27.758 slat (usec): min=67, max=2090.2k, avg=11049.63, stdev=108838.28 00:17:27.758 clat (msec): min=51, max=6604, avg=1207.45, stdev=1619.83 00:17:27.758 lat (msec): min=57, max=7382, avg=1218.50, stdev=1628.09 00:17:27.758 clat percentiles (msec): 00:17:27.758 | 1.00th=[ 136], 5.00th=[ 226], 10.00th=[ 232], 20.00th=[ 249], 00:17:27.758 | 30.00th=[ 257], 40.00th=[ 271], 50.00th=[ 284], 60.00th=[ 1045], 00:17:27.758 | 70.00th=[ 1167], 80.00th=[ 1586], 90.00th=[ 4799], 95.00th=[ 5201], 00:17:27.758 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 6611], 99.95th=[ 6611], 00:17:27.758 | 99.99th=[ 6611] 00:17:27.758 bw ( KiB/s): min= 6144, max=543680, per=4.66%, avg=155311.00, stdev=169129.88, samples=10 00:17:27.758 iops : min= 6, max= 530, avg=151.50, stdev=164.95, samples=10 00:17:27.758 lat (msec) : 100=0.99%, 250=19.89%, 500=38.56%, 2000=26.19%, >=2000=14.36% 00:17:27.758 cpu : usr=0.00%, sys=1.00%, ctx=1840, majf=0, minf=32769 00:17:27.758 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:17:27.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.758 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.758 issued rwts: total=905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.758 job2: (groupid=0, jobs=1): err= 0: pid=4177790: Wed Nov 27 12:55:51 2024 00:17:27.758 read: IOPS=27, BW=27.7MiB/s (29.0MB/s)(279MiB/10090msec) 00:17:27.758 slat (usec): min=56, max=2095.5k, avg=35855.30, stdev=222551.46 00:17:27.758 clat (msec): min=84, max=6633, avg=2608.45, stdev=1646.29 00:17:27.758 lat (msec): min=141, max=6655, avg=2644.30, stdev=1654.32 00:17:27.758 clat percentiles (msec): 
00:17:27.758 | 1.00th=[ 150], 5.00th=[ 1133], 10.00th=[ 1217], 20.00th=[ 1250], 00:17:27.758 | 30.00th=[ 1267], 40.00th=[ 2299], 50.00th=[ 2500], 60.00th=[ 2702], 00:17:27.758 | 70.00th=[ 3004], 80.00th=[ 3239], 90.00th=[ 6409], 95.00th=[ 6477], 00:17:27.758 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611], 00:17:27.758 | 99.99th=[ 6611] 00:17:27.758 bw ( KiB/s): min=12288, max=106496, per=1.87%, avg=62251.80, stdev=43756.97, samples=5 00:17:27.758 iops : min= 12, max= 104, avg=60.60, stdev=42.97, samples=5 00:17:27.758 lat (msec) : 100=0.36%, 250=2.87%, 2000=36.56%, >=2000=60.22% 00:17:27.758 cpu : usr=0.04%, sys=0.99%, ctx=611, majf=0, minf=32769 00:17:27.758 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.7%, 32=11.5%, >=64=77.4% 00:17:27.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.758 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:17:27.758 issued rwts: total=279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.758 job2: (groupid=0, jobs=1): err= 0: pid=4177791: Wed Nov 27 12:55:51 2024 00:17:27.758 read: IOPS=148, BW=148MiB/s (155MB/s)(1494MiB/10079msec) 00:17:27.758 slat (usec): min=43, max=2068.5k, avg=6686.98, stdev=75564.28 00:17:27.758 clat (msec): min=77, max=4980, avg=826.99, stdev=1186.69 00:17:27.758 lat (msec): min=79, max=4983, avg=833.68, stdev=1191.44 00:17:27.758 clat percentiles (msec): 00:17:27.758 | 1.00th=[ 155], 5.00th=[ 300], 10.00th=[ 342], 20.00th=[ 376], 00:17:27.758 | 30.00th=[ 380], 40.00th=[ 384], 50.00th=[ 397], 60.00th=[ 477], 00:17:27.758 | 70.00th=[ 617], 80.00th=[ 651], 90.00th=[ 785], 95.00th=[ 4732], 00:17:27.758 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 5000], 99.95th=[ 5000], 00:17:27.758 | 99.99th=[ 5000] 00:17:27.758 bw ( KiB/s): min= 4096, max=344064, per=6.46%, avg=215205.31, stdev=121975.22, samples=13 00:17:27.758 iops : min= 4, max= 336, avg=210.00, stdev=119.22, samples=13 00:17:27.758 lat (msec) : 100=0.33%, 250=2.01%, 500=60.58%, 750=26.17%, 1000=1.47% 00:17:27.758 lat (msec) : >=2000=9.44% 00:17:27.758 cpu : usr=0.05%, sys=2.55%, ctx=1303, majf=0, minf=32769 00:17:27.758 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:17:27.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.758 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.758 issued rwts: total=1494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.759 job2: (groupid=0, jobs=1): err= 0: pid=4177792: Wed Nov 27 12:55:51 2024 00:17:27.759 read: IOPS=42, BW=42.3MiB/s (44.3MB/s)(426MiB/10081msec) 00:17:27.759 slat (usec): min=47, max=2109.9k, avg=23471.65, stdev=180979.28 00:17:27.759 clat (msec): min=79, max=8586, avg=1806.85, stdev=2807.82 00:17:27.759 lat (msec): min=85, max=8592, avg=1830.33, stdev=2825.79 00:17:27.759 clat percentiles (msec): 00:17:27.759 | 1.00th=[ 99], 5.00th=[ 178], 10.00th=[ 305], 20.00th=[ 498], 00:17:27.759 | 30.00th=[ 634], 40.00th=[ 642], 50.00th=[ 642], 60.00th=[ 667], 00:17:27.759 | 70.00th=[ 693], 80.00th=[ 827], 90.00th=[ 8490], 95.00th=[ 8557], 00:17:27.759 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:17:27.759 | 99.99th=[ 8557] 00:17:27.759 bw ( KiB/s): min=24576, max=202752, per=4.59%, avg=153088.00, stdev=85875.70, samples=4 00:17:27.759 iops : min= 24, max= 198, avg=149.50, stdev=83.86, samples=4 
00:17:27.759 lat (msec) : 100=1.17%, 250=7.04%, 500=12.21%, 750=57.51%, 1000=4.93% 00:17:27.759 lat (msec) : 2000=0.23%, >=2000=16.90% 00:17:27.759 cpu : usr=0.03%, sys=1.35%, ctx=498, majf=0, minf=32769 00:17:27.759 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.5%, >=64=85.2% 00:17:27.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.759 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:27.759 issued rwts: total=426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.759 job2: (groupid=0, jobs=1): err= 0: pid=4177793: Wed Nov 27 12:55:51 2024 00:17:27.759 read: IOPS=112, BW=113MiB/s (118MB/s)(1144MiB/10125msec) 00:17:27.759 slat (usec): min=43, max=2080.2k, avg=8753.25, stdev=86622.08 00:17:27.759 clat (msec): min=102, max=5553, avg=1081.44, stdev=1383.48 00:17:27.759 lat (msec): min=387, max=5575, avg=1090.19, stdev=1388.51 00:17:27.759 clat percentiles (msec): 00:17:27.759 | 1.00th=[ 388], 5.00th=[ 388], 10.00th=[ 393], 20.00th=[ 397], 00:17:27.759 | 30.00th=[ 401], 40.00th=[ 409], 50.00th=[ 472], 60.00th=[ 659], 00:17:27.759 | 70.00th=[ 760], 80.00th=[ 1167], 90.00th=[ 4396], 95.00th=[ 4933], 00:17:27.759 | 99.00th=[ 5470], 99.50th=[ 5537], 99.90th=[ 5537], 99.95th=[ 5537], 00:17:27.759 | 99.99th=[ 5537] 00:17:27.759 bw ( KiB/s): min= 8192, max=337920, per=5.20%, avg=173344.50, stdev=119598.49, samples=12 00:17:27.759 iops : min= 8, max= 330, avg=169.25, stdev=116.75, samples=12 00:17:27.759 lat (msec) : 250=0.09%, 500=50.70%, 750=18.88%, 1000=7.60%, 2000=11.28% 00:17:27.759 lat (msec) : >=2000=11.45% 00:17:27.759 cpu : usr=0.04%, sys=2.54%, ctx=1281, majf=0, minf=32769 00:17:27.759 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:17:27.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.759 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.759 issued rwts: total=1144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.759 job2: (groupid=0, jobs=1): err= 0: pid=4177794: Wed Nov 27 12:55:51 2024 00:17:27.759 read: IOPS=169, BW=170MiB/s (178MB/s)(1715MiB/10105msec) 00:17:27.759 slat (usec): min=41, max=2089.4k, avg=5828.85, stdev=83713.61 00:17:27.759 clat (msec): min=103, max=6719, avg=712.20, stdev=1636.10 00:17:27.759 lat (msec): min=114, max=6719, avg=718.03, stdev=1642.02 00:17:27.759 clat percentiles (msec): 00:17:27.759 | 1.00th=[ 124], 5.00th=[ 125], 10.00th=[ 125], 20.00th=[ 126], 00:17:27.759 | 30.00th=[ 128], 40.00th=[ 207], 50.00th=[ 226], 60.00th=[ 243], 00:17:27.759 | 70.00th=[ 266], 80.00th=[ 498], 90.00th=[ 575], 95.00th=[ 6611], 00:17:27.759 | 99.00th=[ 6678], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:17:27.759 | 99.99th=[ 6745] 00:17:27.759 bw ( KiB/s): min= 2048, max=987136, per=8.87%, avg=295613.91, stdev=355137.71, samples=11 00:17:27.759 iops : min= 2, max= 964, avg=288.55, stdev=346.90, samples=11 00:17:27.759 lat (msec) : 250=66.06%, 500=15.39%, 750=9.91%, 1000=0.35%, 2000=0.12% 00:17:27.759 lat (msec) : >=2000=8.16% 00:17:27.759 cpu : usr=0.03%, sys=2.09%, ctx=2393, majf=0, minf=32769 00:17:27.759 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:17:27.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.759 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.759 issued 
rwts: total=1715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.759 job2: (groupid=0, jobs=1): err= 0: pid=4177795: Wed Nov 27 12:55:51 2024 00:17:27.759 read: IOPS=5, BW=5675KiB/s (5811kB/s)(56.0MiB/10105msec) 00:17:27.759 slat (usec): min=1065, max=2114.5k, avg=178613.50, stdev=555762.05 00:17:27.759 clat (msec): min=102, max=10076, avg=5180.53, stdev=4029.59 00:17:27.759 lat (msec): min=105, max=10104, avg=5359.14, stdev=4022.09 00:17:27.759 clat percentiles (msec): 00:17:27.759 | 1.00th=[ 103], 5.00th=[ 109], 10.00th=[ 122], 20.00th=[ 148], 00:17:27.759 | 30.00th=[ 2265], 40.00th=[ 2333], 50.00th=[ 4463], 60.00th=[ 6611], 00:17:27.759 | 70.00th=[ 8792], 80.00th=[10000], 90.00th=[10000], 95.00th=[10000], 00:17:27.759 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.759 | 99.99th=[10134] 00:17:27.759 lat (msec) : 250=25.00%, >=2000=75.00% 00:17:27.759 cpu : usr=0.00%, sys=0.57%, ctx=79, majf=0, minf=14337 00:17:27.759 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:17:27.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.759 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:27.759 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.759 job2: (groupid=0, jobs=1): err= 0: pid=4177796: Wed Nov 27 12:55:51 2024 00:17:27.759 read: IOPS=2, BW=2646KiB/s (2710kB/s)(26.0MiB/10061msec) 00:17:27.759 slat (usec): min=936, max=2150.9k, avg=384664.40, stdev=784705.87 00:17:27.759 clat (msec): min=58, max=10057, avg=6026.22, stdev=3592.31 00:17:27.759 lat (msec): min=123, max=10060, avg=6410.88, stdev=3460.82 00:17:27.759 clat percentiles (msec): 00:17:27.759 | 1.00th=[ 59], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 2333], 00:17:27.759 | 30.00th=[ 2366], 40.00th=[ 4463], 50.00th=[ 6611], 60.00th=[ 8658], 00:17:27.759 | 70.00th=[ 8792], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10000], 00:17:27.759 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:17:27.759 | 99.99th=[10000] 00:17:27.759 lat (msec) : 100=3.85%, 250=7.69%, >=2000=88.46% 00:17:27.759 cpu : usr=0.00%, sys=0.28%, ctx=65, majf=0, minf=6657 00:17:27.759 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:17:27.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.759 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:27.759 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.759 job2: (groupid=0, jobs=1): err= 0: pid=4177797: Wed Nov 27 12:55:51 2024 00:17:27.759 read: IOPS=42, BW=42.4MiB/s (44.4MB/s)(430MiB/10149msec) 00:17:27.759 slat (usec): min=41, max=2082.0k, avg=23419.45, stdev=181571.26 00:17:27.759 clat (msec): min=75, max=6209, avg=1773.37, stdev=1517.91 00:17:27.759 lat (msec): min=155, max=6531, avg=1796.79, stdev=1541.78 00:17:27.759 clat percentiles (msec): 00:17:27.759 | 1.00th=[ 184], 5.00th=[ 667], 10.00th=[ 667], 20.00th=[ 676], 00:17:27.759 | 30.00th=[ 693], 40.00th=[ 735], 50.00th=[ 776], 60.00th=[ 2366], 00:17:27.759 | 70.00th=[ 2601], 80.00th=[ 2769], 90.00th=[ 2937], 95.00th=[ 6141], 00:17:27.759 | 99.00th=[ 6208], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:17:27.759 | 99.99th=[ 6208] 00:17:27.759 bw ( KiB/s): min= 8192, max=188416, 
per=3.71%, avg=123646.80, stdev=73957.28, samples=5 00:17:27.759 iops : min= 8, max= 184, avg=120.60, stdev=72.21, samples=5 00:17:27.759 lat (msec) : 100=0.23%, 250=0.93%, 750=43.49%, 1000=13.95%, >=2000=41.40% 00:17:27.759 cpu : usr=0.05%, sys=1.43%, ctx=375, majf=0, minf=32769 00:17:27.759 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.4%, >=64=85.3% 00:17:27.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.759 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:27.759 issued rwts: total=430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.759 job2: (groupid=0, jobs=1): err= 0: pid=4177798: Wed Nov 27 12:55:51 2024 00:17:27.759 read: IOPS=5, BW=5995KiB/s (6139kB/s)(59.0MiB/10078msec) 00:17:27.759 slat (usec): min=554, max=2095.3k, avg=169530.25, stdev=522591.11 00:17:27.759 clat (msec): min=75, max=10049, avg=3372.02, stdev=3087.64 00:17:27.759 lat (msec): min=128, max=10077, avg=3541.55, stdev=3176.83 00:17:27.759 clat percentiles (msec): 00:17:27.759 | 1.00th=[ 75], 5.00th=[ 133], 10.00th=[ 146], 20.00th=[ 180], 00:17:27.759 | 30.00th=[ 1989], 40.00th=[ 2072], 50.00th=[ 2165], 60.00th=[ 2299], 00:17:27.759 | 70.00th=[ 4396], 80.00th=[ 6544], 90.00th=[ 8792], 95.00th=[10000], 00:17:27.759 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:17:27.760 | 99.99th=[10000] 00:17:27.760 lat (msec) : 100=1.69%, 250=18.64%, 2000=13.56%, >=2000=66.10% 00:17:27.760 cpu : usr=0.00%, sys=0.38%, ctx=134, majf=0, minf=15105 00:17:27.760 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0% 00:17:27.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.760 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:27.760 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.760 job2: (groupid=0, jobs=1): err= 0: pid=4177799: Wed Nov 27 12:55:51 2024 00:17:27.760 read: IOPS=5, BW=5669KiB/s (5805kB/s)(56.0MiB/10116msec) 00:17:27.760 slat (usec): min=402, max=2130.0k, avg=178765.56, stdev=559151.13 00:17:27.760 clat (msec): min=104, max=10114, avg=8398.68, stdev=3015.40 00:17:27.760 lat (msec): min=123, max=10115, avg=8577.44, stdev=2804.10 00:17:27.760 clat percentiles (msec): 00:17:27.760 | 1.00th=[ 105], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 4396], 00:17:27.760 | 30.00th=[ 9866], 40.00th=[ 9866], 50.00th=[10000], 60.00th=[10000], 00:17:27.760 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:17:27.760 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.760 | 99.99th=[10134] 00:17:27.760 lat (msec) : 250=3.57%, >=2000=96.43% 00:17:27.760 cpu : usr=0.00%, sys=0.63%, ctx=109, majf=0, minf=14337 00:17:27.760 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:17:27.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.760 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:27.760 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.760 job3: (groupid=0, jobs=1): err= 0: pid=4177800: Wed Nov 27 12:55:51 2024 00:17:27.760 read: IOPS=28, BW=28.7MiB/s (30.1MB/s)(290MiB/10094msec) 00:17:27.760 slat (usec): min=440, max=2109.1k, avg=34476.97, stdev=222138.44 00:17:27.760 
clat (msec): min=92, max=8656, avg=1436.11, stdev=1902.82 00:17:27.760 lat (msec): min=94, max=8659, avg=1470.59, stdev=1948.19 00:17:27.760 clat percentiles (msec): 00:17:27.760 | 1.00th=[ 105], 5.00th=[ 266], 10.00th=[ 393], 20.00th=[ 634], 00:17:27.760 | 30.00th=[ 877], 40.00th=[ 995], 50.00th=[ 1020], 60.00th=[ 1036], 00:17:27.760 | 70.00th=[ 1045], 80.00th=[ 1070], 90.00th=[ 1200], 95.00th=[ 7349], 00:17:27.760 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:17:27.760 | 99.99th=[ 8658] 00:17:27.760 bw ( KiB/s): min=103413, max=122880, per=3.33%, avg=110929.67, stdev=10463.46, samples=3 00:17:27.760 iops : min= 100, max= 120, avg=108.00, stdev=10.58, samples=3 00:17:27.760 lat (msec) : 100=0.69%, 250=3.79%, 500=9.31%, 750=10.69%, 1000=17.59% 00:17:27.760 lat (msec) : 2000=48.28%, >=2000=9.66% 00:17:27.760 cpu : usr=0.00%, sys=1.23%, ctx=527, majf=0, minf=32769 00:17:27.760 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.5%, 32=11.0%, >=64=78.3% 00:17:27.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.760 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:27.760 issued rwts: total=290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.760 job3: (groupid=0, jobs=1): err= 0: pid=4177801: Wed Nov 27 12:55:51 2024 00:17:27.760 read: IOPS=5, BW=5160KiB/s (5284kB/s)(51.0MiB/10120msec) 00:17:27.760 slat (usec): min=831, max=3321.3k, avg=196937.33, stdev=665203.04 00:17:27.760 clat (msec): min=75, max=10106, avg=5133.82, stdev=3624.52 00:17:27.760 lat (msec): min=132, max=10119, avg=5330.76, stdev=3617.05 00:17:27.760 clat percentiles (msec): 00:17:27.760 | 1.00th=[ 77], 5.00th=[ 142], 10.00th=[ 165], 20.00th=[ 2299], 00:17:27.760 | 30.00th=[ 2333], 40.00th=[ 2333], 50.00th=[ 4396], 60.00th=[ 4463], 00:17:27.760 | 70.00th=[ 9866], 80.00th=[10000], 90.00th=[10000], 95.00th=[10134], 00:17:27.760 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.760 | 99.99th=[10134] 00:17:27.760 lat (msec) : 100=1.96%, 250=9.80%, >=2000=88.24% 00:17:27.760 cpu : usr=0.00%, sys=0.46%, ctx=92, majf=0, minf=13057 00:17:27.760 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:17:27.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.760 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:27.760 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.760 job3: (groupid=0, jobs=1): err= 0: pid=4177802: Wed Nov 27 12:55:51 2024 00:17:27.760 read: IOPS=214, BW=215MiB/s (225MB/s)(2158MiB/10058msec) 00:17:27.760 slat (usec): min=36, max=2064.1k, avg=4631.15, stdev=62620.37 00:17:27.760 clat (msec): min=54, max=4674, avg=566.93, stdev=991.76 00:17:27.760 lat (msec): min=57, max=4675, avg=571.56, stdev=995.16 00:17:27.760 clat percentiles (msec): 00:17:27.760 | 1.00th=[ 224], 5.00th=[ 228], 10.00th=[ 230], 20.00th=[ 232], 00:17:27.760 | 30.00th=[ 234], 40.00th=[ 239], 50.00th=[ 334], 60.00th=[ 380], 00:17:27.760 | 70.00th=[ 380], 80.00th=[ 401], 90.00th=[ 510], 95.00th=[ 4396], 00:17:27.760 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:17:27.760 | 99.99th=[ 4665] 00:17:27.760 bw ( KiB/s): min=10240, max=563200, per=10.30%, avg=343324.08, stdev=189904.18, samples=12 00:17:27.760 iops : min= 10, max= 550, avg=335.25, stdev=185.45, samples=12 
00:17:27.760 lat (msec) : 100=0.56%, 250=45.74%, 500=43.10%, 750=4.49%, >=2000=6.12% 00:17:27.760 cpu : usr=0.10%, sys=2.36%, ctx=2097, majf=0, minf=32769 00:17:27.760 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:17:27.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.760 issued rwts: total=2158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.760 job3: (groupid=0, jobs=1): err= 0: pid=4177803: Wed Nov 27 12:55:51 2024 00:17:27.760 read: IOPS=8, BW=8446KiB/s (8649kB/s)(83.0MiB/10063msec) 00:17:27.760 slat (usec): min=899, max=2117.2k, avg=120863.78, stdev=463203.48 00:17:27.760 clat (msec): min=30, max=10061, avg=5076.10, stdev=4359.66 00:17:27.760 lat (msec): min=76, max=10062, avg=5196.97, stdev=4357.13 00:17:27.760 clat percentiles (msec): 00:17:27.760 | 1.00th=[ 31], 5.00th=[ 82], 10.00th=[ 90], 20.00th=[ 104], 00:17:27.760 | 30.00th=[ 171], 40.00th=[ 2299], 50.00th=[ 6611], 60.00th=[ 8658], 00:17:27.760 | 70.00th=[ 8792], 80.00th=[10000], 90.00th=[10000], 95.00th=[10000], 00:17:27.760 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:17:27.760 | 99.99th=[10000] 00:17:27.760 lat (msec) : 50=1.20%, 100=15.66%, 250=22.89%, >=2000=60.24% 00:17:27.760 cpu : usr=0.00%, sys=0.81%, ctx=71, majf=0, minf=21249 00:17:27.760 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.6%, 16=19.3%, 32=38.6%, >=64=24.1% 00:17:27.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.760 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:27.760 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.760 job3: (groupid=0, jobs=1): err= 0: pid=4177804: Wed Nov 27 12:55:51 2024 00:17:27.760 read: IOPS=2, BW=3039KiB/s (3112kB/s)(30.0MiB/10109msec) 00:17:27.760 slat (usec): min=911, max=2150.9k, avg=334856.08, stdev=744079.95 00:17:27.760 clat (msec): min=62, max=10106, avg=7613.38, stdev=3601.51 00:17:27.760 lat (msec): min=156, max=10108, avg=7948.23, stdev=3332.20 00:17:27.760 clat percentiles (msec): 00:17:27.760 | 1.00th=[ 63], 5.00th=[ 157], 10.00th=[ 174], 20.00th=[ 4396], 00:17:27.760 | 30.00th=[ 4530], 40.00th=[ 8792], 50.00th=[10000], 60.00th=[10134], 00:17:27.760 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:17:27.760 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.760 | 99.99th=[10134] 00:17:27.760 lat (msec) : 100=3.33%, 250=6.67%, >=2000=90.00% 00:17:27.760 cpu : usr=0.00%, sys=0.26%, ctx=83, majf=0, minf=7681 00:17:27.760 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:17:27.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.760 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:27.760 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.760 job3: (groupid=0, jobs=1): err= 0: pid=4177805: Wed Nov 27 12:55:51 2024 00:17:27.760 read: IOPS=2, BW=2643KiB/s (2706kB/s)(26.0MiB/10074msec) 00:17:27.760 slat (usec): min=1330, max=2207.7k, avg=385013.53, stdev=791378.55 00:17:27.760 clat (msec): min=62, max=10062, avg=5984.32, stdev=3706.46 00:17:27.760 lat (msec): min=152, max=10072, avg=6369.33, 
stdev=3584.64 00:17:27.760 clat percentiles (msec): 00:17:27.760 | 1.00th=[ 63], 5.00th=[ 153], 10.00th=[ 190], 20.00th=[ 2333], 00:17:27.760 | 30.00th=[ 2333], 40.00th=[ 4396], 50.00th=[ 4396], 60.00th=[ 8792], 00:17:27.760 | 70.00th=[10000], 80.00th=[10000], 90.00th=[10000], 95.00th=[10000], 00:17:27.760 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:17:27.760 | 99.99th=[10000] 00:17:27.760 lat (msec) : 100=3.85%, 250=7.69%, >=2000=88.46% 00:17:27.760 cpu : usr=0.01%, sys=0.16%, ctx=80, majf=0, minf=6657 00:17:27.760 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:17:27.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.760 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:27.760 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.760 job3: (groupid=0, jobs=1): err= 0: pid=4177806: Wed Nov 27 12:55:51 2024 00:17:27.760 read: IOPS=10, BW=10.9MiB/s (11.4MB/s)(110MiB/10107msec) 00:17:27.760 slat (usec): min=525, max=2104.7k, avg=90926.82, stdev=391261.82 00:17:27.760 clat (msec): min=104, max=10100, avg=8410.90, stdev=2652.56 00:17:27.760 lat (msec): min=115, max=10106, avg=8501.83, stdev=2533.98 00:17:27.760 clat percentiles (msec): 00:17:27.760 | 1.00th=[ 116], 5.00th=[ 2265], 10.00th=[ 4463], 20.00th=[ 6544], 00:17:27.760 | 30.00th=[ 9463], 40.00th=[ 9597], 50.00th=[ 9731], 60.00th=[ 9731], 00:17:27.760 | 70.00th=[ 9866], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:17:27.760 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.760 | 99.99th=[10134] 00:17:27.760 lat (msec) : 250=3.64%, >=2000=96.36% 00:17:27.760 cpu : usr=0.01%, sys=0.82%, ctx=206, majf=0, minf=28161 00:17:27.761 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.3%, 16=14.5%, 32=29.1%, >=64=42.7% 00:17:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.761 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:27.761 issued rwts: total=110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.761 job3: (groupid=0, jobs=1): err= 0: pid=4177807: Wed Nov 27 12:55:51 2024 00:17:27.761 read: IOPS=38, BW=38.5MiB/s (40.4MB/s)(390MiB/10129msec) 00:17:27.761 slat (usec): min=60, max=2141.0k, avg=25654.52, stdev=188264.19 00:17:27.761 clat (msec): min=120, max=8911, avg=3179.12, stdev=2969.56 00:17:27.761 lat (msec): min=137, max=8914, avg=3204.78, stdev=2977.80 00:17:27.761 clat percentiles (msec): 00:17:27.761 | 1.00th=[ 150], 5.00th=[ 498], 10.00th=[ 502], 20.00th=[ 527], 00:17:27.761 | 30.00th=[ 542], 40.00th=[ 542], 50.00th=[ 2769], 60.00th=[ 3339], 00:17:27.761 | 70.00th=[ 5403], 80.00th=[ 6141], 90.00th=[ 8792], 95.00th=[ 8792], 00:17:27.761 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:17:27.761 | 99.99th=[ 8926] 00:17:27.761 bw ( KiB/s): min= 6144, max=200303, per=2.02%, avg=67277.88, stdev=75373.26, samples=8 00:17:27.761 iops : min= 6, max= 195, avg=65.62, stdev=73.45, samples=8 00:17:27.761 lat (msec) : 250=1.03%, 500=5.64%, 750=40.51%, >=2000=52.82% 00:17:27.761 cpu : usr=0.02%, sys=1.73%, ctx=580, majf=0, minf=32769 00:17:27.761 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.2%, >=64=83.8% 00:17:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.761 complete : 0=0.0%, 4=99.6%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:27.761 issued rwts: total=390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.761 job3: (groupid=0, jobs=1): err= 0: pid=4177808: Wed Nov 27 12:55:51 2024 00:17:27.761 read: IOPS=6, BW=6373KiB/s (6526kB/s)(63.0MiB/10122msec) 00:17:27.761 slat (usec): min=792, max=2109.0k, avg=158933.45, stdev=528273.06 00:17:27.761 clat (msec): min=108, max=10118, avg=8108.40, stdev=3013.66 00:17:27.761 lat (msec): min=2180, max=10121, avg=8267.33, stdev=2844.21 00:17:27.761 clat percentiles (msec): 00:17:27.761 | 1.00th=[ 109], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 4396], 00:17:27.761 | 30.00th=[ 6544], 40.00th=[10000], 50.00th=[10000], 60.00th=[10134], 00:17:27.761 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:17:27.761 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:27.761 | 99.99th=[10134] 00:17:27.761 lat (msec) : 250=1.59%, >=2000=98.41% 00:17:27.761 cpu : usr=0.02%, sys=0.66%, ctx=109, majf=0, minf=16129 00:17:27.761 IO depths : 1=1.6%, 2=3.2%, 4=6.3%, 8=12.7%, 16=25.4%, 32=50.8%, >=64=0.0% 00:17:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.761 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:27.761 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.761 job3: (groupid=0, jobs=1): err= 0: pid=4177809: Wed Nov 27 12:55:51 2024 00:17:27.761 read: IOPS=85, BW=85.1MiB/s (89.3MB/s)(862MiB/10126msec) 00:17:27.761 slat (usec): min=44, max=2071.1k, avg=11675.12, stdev=112230.73 00:17:27.761 clat (msec): min=56, max=5921, avg=978.35, stdev=1217.05 00:17:27.761 lat (msec): min=144, max=5979, avg=990.03, stdev=1228.13 00:17:27.761 clat percentiles (msec): 00:17:27.761 | 1.00th=[ 401], 5.00th=[ 418], 10.00th=[ 443], 20.00th=[ 498], 00:17:27.761 | 30.00th=[ 502], 40.00th=[ 506], 50.00th=[ 510], 60.00th=[ 510], 00:17:27.761 | 70.00th=[ 542], 80.00th=[ 1418], 90.00th=[ 1754], 95.00th=[ 4732], 00:17:27.761 | 99.00th=[ 5805], 99.50th=[ 5873], 99.90th=[ 5940], 99.95th=[ 5940], 00:17:27.761 | 99.99th=[ 5940] 00:17:27.761 bw ( KiB/s): min= 4096, max=303104, per=5.64%, avg=187891.62, stdev=110299.18, samples=8 00:17:27.761 iops : min= 4, max= 296, avg=183.38, stdev=107.88, samples=8 00:17:27.761 lat (msec) : 100=0.12%, 250=0.23%, 500=24.48%, 750=53.25%, 2000=14.73% 00:17:27.761 lat (msec) : >=2000=7.19% 00:17:27.761 cpu : usr=0.06%, sys=1.75%, ctx=820, majf=0, minf=32769 00:17:27.761 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7% 00:17:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.761 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.761 issued rwts: total=862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.761 job3: (groupid=0, jobs=1): err= 0: pid=4177810: Wed Nov 27 12:55:51 2024 00:17:27.761 read: IOPS=27, BW=27.4MiB/s (28.8MB/s)(276MiB/10063msec) 00:17:27.761 slat (usec): min=75, max=2101.6k, avg=36311.40, stdev=225857.43 00:17:27.761 clat (msec): min=38, max=8765, avg=1117.88, stdev=1267.29 00:17:27.761 lat (msec): min=69, max=8774, avg=1154.19, stdev=1348.14 00:17:27.761 clat percentiles (msec): 00:17:27.761 | 1.00th=[ 73], 5.00th=[ 201], 10.00th=[ 321], 20.00th=[ 542], 00:17:27.761 | 30.00th=[ 751], 40.00th=[ 986], 
50.00th=[ 1036], 60.00th=[ 1053], 00:17:27.761 | 70.00th=[ 1070], 80.00th=[ 1099], 90.00th=[ 1133], 95.00th=[ 3272], 00:17:27.761 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8792], 99.95th=[ 8792], 00:17:27.761 | 99.99th=[ 8792] 00:17:27.761 bw ( KiB/s): min=61440, max=126976, per=2.83%, avg=94208.00, stdev=46340.95, samples=2 00:17:27.761 iops : min= 60, max= 124, avg=92.00, stdev=45.25, samples=2 00:17:27.761 lat (msec) : 50=0.36%, 100=2.90%, 250=3.62%, 500=11.96%, 750=10.87% 00:17:27.761 lat (msec) : 1000=11.23%, 2000=53.26%, >=2000=5.80% 00:17:27.761 cpu : usr=0.01%, sys=1.17%, ctx=519, majf=0, minf=32769 00:17:27.761 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.8%, 32=11.6%, >=64=77.2% 00:17:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.761 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:17:27.761 issued rwts: total=276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.761 job3: (groupid=0, jobs=1): err= 0: pid=4177811: Wed Nov 27 12:55:51 2024 00:17:27.761 read: IOPS=1, BW=1829KiB/s (1873kB/s)(18.0MiB/10076msec) 00:17:27.761 slat (msec): min=9, max=2121, avg=556.46, stdev=891.45 00:17:27.761 clat (msec): min=58, max=9976, avg=4469.88, stdev=3620.09 00:17:27.761 lat (msec): min=132, max=10075, avg=5026.35, stdev=3671.65 00:17:27.761 clat percentiles (msec): 00:17:27.761 | 1.00th=[ 59], 5.00th=[ 59], 10.00th=[ 133], 20.00th=[ 174], 00:17:27.761 | 30.00th=[ 2299], 40.00th=[ 4396], 50.00th=[ 4463], 60.00th=[ 4463], 00:17:27.761 | 70.00th=[ 6611], 80.00th=[ 8792], 90.00th=[10000], 95.00th=[10000], 00:17:27.761 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:17:27.761 | 99.99th=[10000] 00:17:27.761 lat (msec) : 100=5.56%, 250=22.22%, >=2000=72.22% 00:17:27.761 cpu : usr=0.00%, sys=0.13%, ctx=73, majf=0, minf=4609 00:17:27.761 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:17:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.761 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:27.761 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.761 job3: (groupid=0, jobs=1): err= 0: pid=4177812: Wed Nov 27 12:55:51 2024 00:17:27.761 read: IOPS=128, BW=129MiB/s (135MB/s)(1301MiB/10114msec) 00:17:27.761 slat (usec): min=44, max=2050.5k, avg=7681.06, stdev=80082.05 00:17:27.761 clat (msec): min=112, max=2956, avg=957.83, stdev=912.35 00:17:27.761 lat (msec): min=114, max=2958, avg=965.51, stdev=915.56 00:17:27.761 clat percentiles (msec): 00:17:27.761 | 1.00th=[ 161], 5.00th=[ 313], 10.00th=[ 384], 20.00th=[ 388], 00:17:27.761 | 30.00th=[ 451], 40.00th=[ 502], 50.00th=[ 550], 60.00th=[ 609], 00:17:27.761 | 70.00th=[ 693], 80.00th=[ 827], 90.00th=[ 2769], 95.00th=[ 2836], 00:17:27.761 | 99.00th=[ 2903], 99.50th=[ 2903], 99.90th=[ 2937], 99.95th=[ 2970], 00:17:27.761 | 99.99th=[ 2970] 00:17:27.761 bw ( KiB/s): min=26624, max=331776, per=5.55%, avg=184950.15, stdev=94442.87, samples=13 00:17:27.761 iops : min= 26, max= 324, avg=180.62, stdev=92.23, samples=13 00:17:27.761 lat (msec) : 250=3.38%, 500=34.59%, 750=33.74%, 1000=8.76%, >=2000=19.52% 00:17:27.761 cpu : usr=0.11%, sys=2.60%, ctx=1252, majf=0, minf=32770 00:17:27.761 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.2% 00:17:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.761 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.761 issued rwts: total=1301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.761 job4: (groupid=0, jobs=1): err= 0: pid=4177813: Wed Nov 27 12:55:51 2024 00:17:27.761 read: IOPS=37, BW=37.2MiB/s (39.0MB/s)(377MiB/10124msec) 00:17:27.761 slat (usec): min=82, max=2068.6k, avg=26521.10, stdev=165209.31 00:17:27.761 clat (msec): min=123, max=7808, avg=2640.96, stdev=2647.39 00:17:27.761 lat (msec): min=155, max=7841, avg=2667.48, stdev=2660.09 00:17:27.761 clat percentiles (msec): 00:17:27.761 | 1.00th=[ 194], 5.00th=[ 489], 10.00th=[ 676], 20.00th=[ 776], 00:17:27.761 | 30.00th=[ 835], 40.00th=[ 911], 50.00th=[ 995], 60.00th=[ 1250], 00:17:27.761 | 70.00th=[ 2869], 80.00th=[ 6342], 90.00th=[ 7416], 95.00th=[ 7617], 00:17:27.761 | 99.00th=[ 7819], 99.50th=[ 7819], 99.90th=[ 7819], 99.95th=[ 7819], 00:17:27.761 | 99.99th=[ 7819] 00:17:27.761 bw ( KiB/s): min= 8192, max=217088, per=2.56%, avg=85296.50, stdev=73743.25, samples=6 00:17:27.761 iops : min= 8, max= 212, avg=83.17, stdev=71.96, samples=6 00:17:27.761 lat (msec) : 250=1.33%, 500=3.71%, 750=11.14%, 1000=33.95%, 2000=10.61% 00:17:27.761 lat (msec) : >=2000=39.26% 00:17:27.761 cpu : usr=0.01%, sys=1.35%, ctx=997, majf=0, minf=32769 00:17:27.761 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.5%, >=64=83.3% 00:17:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.761 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:27.761 issued rwts: total=377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.761 job4: (groupid=0, jobs=1): err= 0: pid=4177814: Wed Nov 27 12:55:51 2024 00:17:27.761 read: IOPS=42, BW=42.4MiB/s (44.4MB/s)(428MiB/10101msec) 00:17:27.761 slat (usec): min=112, max=2009.6k, avg=23394.59, stdev=125158.64 00:17:27.761 clat (msec): min=83, max=8099, avg=2318.37, stdev=1337.63 00:17:27.761 lat (msec): min=100, max=8138, avg=2341.76, stdev=1356.56 00:17:27.761 clat percentiles (msec): 00:17:27.761 | 1.00th=[ 201], 5.00th=[ 684], 10.00th=[ 1053], 20.00th=[ 1099], 00:17:27.761 | 30.00th=[ 1133], 40.00th=[ 1267], 50.00th=[ 1787], 60.00th=[ 3272], 00:17:27.761 | 70.00th=[ 3440], 80.00th=[ 3675], 90.00th=[ 4010], 95.00th=[ 4245], 00:17:27.761 | 99.00th=[ 4396], 99.50th=[ 4463], 99.90th=[ 8087], 99.95th=[ 8087], 00:17:27.761 | 99.99th=[ 8087] 00:17:27.761 bw ( KiB/s): min=16384, max=129024, per=1.84%, avg=61453.50, stdev=38776.42, samples=10 00:17:27.761 iops : min= 16, max= 126, avg=60.00, stdev=37.87, samples=10 00:17:27.761 lat (msec) : 100=0.23%, 250=1.17%, 500=1.40%, 750=3.04%, 1000=2.34% 00:17:27.761 lat (msec) : 2000=44.39%, >=2000=47.43% 00:17:27.761 cpu : usr=0.05%, sys=1.40%, ctx=1308, majf=0, minf=32769 00:17:27.761 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.5%, >=64=85.3% 00:17:27.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.761 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:27.761 issued rwts: total=428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.761 job4: (groupid=0, jobs=1): err= 0: pid=4177815: Wed Nov 27 12:55:51 2024 00:17:27.761 read: IOPS=38, BW=38.8MiB/s (40.7MB/s)(392MiB/10097msec) 00:17:27.762 slat (usec): min=97, max=2055.0k, 
avg=25533.07, stdev=151048.77 00:17:27.762 clat (msec): min=85, max=6645, avg=1631.05, stdev=1370.40 00:17:27.762 lat (msec): min=102, max=6671, avg=1656.58, stdev=1391.87 00:17:27.762 clat percentiles (msec): 00:17:27.762 | 1.00th=[ 121], 5.00th=[ 414], 10.00th=[ 768], 20.00th=[ 844], 00:17:27.762 | 30.00th=[ 885], 40.00th=[ 927], 50.00th=[ 1099], 60.00th=[ 1385], 00:17:27.762 | 70.00th=[ 1921], 80.00th=[ 2232], 90.00th=[ 2467], 95.00th=[ 6208], 00:17:27.762 | 99.00th=[ 6611], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678], 00:17:27.762 | 99.99th=[ 6678] 00:17:27.762 bw ( KiB/s): min=24576, max=174080, per=2.32%, avg=77265.14, stdev=51382.25, samples=7 00:17:27.762 iops : min= 24, max= 170, avg=75.43, stdev=50.17, samples=7 00:17:27.762 lat (msec) : 100=0.26%, 250=2.30%, 500=4.34%, 750=3.06%, 1000=34.69% 00:17:27.762 lat (msec) : 2000=26.79%, >=2000=28.57% 00:17:27.762 cpu : usr=0.02%, sys=1.30%, ctx=1203, majf=0, minf=32769 00:17:27.762 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.2%, >=64=83.9% 00:17:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.762 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:27.762 issued rwts: total=392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.762 job4: (groupid=0, jobs=1): err= 0: pid=4177816: Wed Nov 27 12:55:51 2024 00:17:27.762 read: IOPS=87, BW=87.3MiB/s (91.5MB/s)(883MiB/10114msec) 00:17:27.762 slat (usec): min=43, max=2046.5k, avg=11323.20, stdev=73627.47 00:17:27.762 clat (msec): min=109, max=4345, avg=1172.01, stdev=935.75 00:17:27.762 lat (msec): min=118, max=4351, avg=1183.33, stdev=942.07 00:17:27.762 clat percentiles (msec): 00:17:27.762 | 1.00th=[ 144], 5.00th=[ 249], 10.00th=[ 388], 20.00th=[ 676], 00:17:27.762 | 30.00th=[ 709], 40.00th=[ 760], 50.00th=[ 902], 60.00th=[ 1150], 00:17:27.762 | 70.00th=[ 1250], 80.00th=[ 1351], 90.00th=[ 1854], 95.00th=[ 4144], 00:17:27.762 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4329], 99.95th=[ 4329], 00:17:27.762 | 99.99th=[ 4329] 00:17:27.762 bw ( KiB/s): min=47104, max=294912, per=3.87%, avg=129013.42, stdev=70740.20, samples=12 00:17:27.762 iops : min= 46, max= 288, avg=125.92, stdev=69.16, samples=12 00:17:27.762 lat (msec) : 250=5.10%, 500=11.33%, 750=21.97%, 1000=15.74%, 2000=38.17% 00:17:27.762 lat (msec) : >=2000=7.70% 00:17:27.762 cpu : usr=0.05%, sys=2.19%, ctx=1512, majf=0, minf=32769 00:17:27.762 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:17:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.762 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.762 issued rwts: total=883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.762 job4: (groupid=0, jobs=1): err= 0: pid=4177817: Wed Nov 27 12:55:51 2024 00:17:27.762 read: IOPS=23, BW=23.8MiB/s (25.0MB/s)(240MiB/10083msec) 00:17:27.762 slat (usec): min=701, max=2098.6k, avg=41695.85, stdev=213143.87 00:17:27.762 clat (msec): min=74, max=8737, avg=4660.06, stdev=3243.46 00:17:27.762 lat (msec): min=85, max=8747, avg=4701.76, stdev=3245.27 00:17:27.762 clat percentiles (msec): 00:17:27.762 | 1.00th=[ 88], 5.00th=[ 232], 10.00th=[ 414], 20.00th=[ 919], 00:17:27.762 | 30.00th=[ 1687], 40.00th=[ 3910], 50.00th=[ 4329], 60.00th=[ 6544], 00:17:27.762 | 70.00th=[ 7684], 80.00th=[ 8288], 90.00th=[ 8557], 95.00th=[ 8658], 00:17:27.762 | 
99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:17:27.762 | 99.99th=[ 8792] 00:17:27.762 bw ( KiB/s): min=14336, max=47104, per=0.84%, avg=27979.00, stdev=12055.52, samples=6 00:17:27.762 iops : min= 14, max= 46, avg=27.17, stdev=11.74, samples=6 00:17:27.762 lat (msec) : 100=2.50%, 250=2.50%, 500=6.25%, 750=5.83%, 1000=3.75% 00:17:27.762 lat (msec) : 2000=13.75%, >=2000=65.42% 00:17:27.762 cpu : usr=0.01%, sys=1.31%, ctx=1039, majf=0, minf=32769 00:17:27.762 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.7%, 32=13.3%, >=64=73.8% 00:17:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.762 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:17:27.762 issued rwts: total=240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.762 job4: (groupid=0, jobs=1): err= 0: pid=4177818: Wed Nov 27 12:55:51 2024 00:17:27.762 read: IOPS=44, BW=44.5MiB/s (46.7MB/s)(451MiB/10137msec) 00:17:27.762 slat (usec): min=31, max=2060.0k, avg=22248.12, stdev=148588.35 00:17:27.762 clat (msec): min=99, max=6291, avg=1547.01, stdev=1241.93 00:17:27.762 lat (msec): min=185, max=6303, avg=1569.26, stdev=1260.49 00:17:27.762 clat percentiles (msec): 00:17:27.762 | 1.00th=[ 266], 5.00th=[ 659], 10.00th=[ 768], 20.00th=[ 802], 00:17:27.762 | 30.00th=[ 818], 40.00th=[ 885], 50.00th=[ 1133], 60.00th=[ 1435], 00:17:27.762 | 70.00th=[ 1787], 80.00th=[ 1938], 90.00th=[ 2433], 95.00th=[ 5000], 00:17:27.762 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6275], 99.95th=[ 6275], 00:17:27.762 | 99.99th=[ 6275] 00:17:27.762 bw ( KiB/s): min=38912, max=151552, per=2.48%, avg=82672.12, stdev=43161.26, samples=8 00:17:27.762 iops : min= 38, max= 148, avg=80.62, stdev=42.21, samples=8 00:17:27.762 lat (msec) : 100=0.22%, 250=0.44%, 500=2.88%, 750=2.66%, 1000=41.46% 00:17:27.762 lat (msec) : 2000=33.70%, >=2000=18.63% 00:17:27.762 cpu : usr=0.07%, sys=1.59%, ctx=1056, majf=0, minf=32769 00:17:27.762 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.0% 00:17:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.762 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:27.762 issued rwts: total=451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.762 job4: (groupid=0, jobs=1): err= 0: pid=4177819: Wed Nov 27 12:55:51 2024 00:17:27.762 read: IOPS=24, BW=24.3MiB/s (25.5MB/s)(247MiB/10158msec) 00:17:27.762 slat (usec): min=634, max=2082.8k, avg=40754.08, stdev=223776.42 00:17:27.762 clat (msec): min=89, max=8546, avg=4030.01, stdev=3463.56 00:17:27.762 lat (msec): min=172, max=8551, avg=4070.77, stdev=3471.21 00:17:27.762 clat percentiles (msec): 00:17:27.762 | 1.00th=[ 197], 5.00th=[ 309], 10.00th=[ 489], 20.00th=[ 877], 00:17:27.762 | 30.00th=[ 1150], 40.00th=[ 1536], 50.00th=[ 1972], 60.00th=[ 4077], 00:17:27.762 | 70.00th=[ 8221], 80.00th=[ 8288], 90.00th=[ 8356], 95.00th=[ 8490], 00:17:27.762 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:17:27.762 | 99.99th=[ 8557] 00:17:27.762 bw ( KiB/s): min=36864, max=77824, per=1.83%, avg=60899.25, stdev=18201.06, samples=4 00:17:27.762 iops : min= 36, max= 76, avg=59.25, stdev=17.84, samples=4 00:17:27.762 lat (msec) : 100=0.40%, 250=2.83%, 500=7.69%, 750=5.67%, 1000=8.91% 00:17:27.762 lat (msec) : 2000=26.32%, >=2000=48.18% 00:17:27.762 cpu : usr=0.02%, sys=1.24%, ctx=931, 
majf=0, minf=32769 00:17:27.762 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.5%, 32=13.0%, >=64=74.5% 00:17:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.762 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:17:27.762 issued rwts: total=247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.762 job4: (groupid=0, jobs=1): err= 0: pid=4177820: Wed Nov 27 12:55:51 2024 00:17:27.762 read: IOPS=21, BW=21.6MiB/s (22.6MB/s)(218MiB/10100msec) 00:17:27.762 slat (usec): min=736, max=2116.6k, avg=45875.59, stdev=255084.06 00:17:27.762 clat (msec): min=97, max=9079, avg=2664.66, stdev=3143.52 00:17:27.762 lat (msec): min=177, max=9088, avg=2710.54, stdev=3169.78 00:17:27.762 clat percentiles (msec): 00:17:27.762 | 1.00th=[ 257], 5.00th=[ 380], 10.00th=[ 558], 20.00th=[ 718], 00:17:27.762 | 30.00th=[ 852], 40.00th=[ 1070], 50.00th=[ 1301], 60.00th=[ 1385], 00:17:27.762 | 70.00th=[ 1485], 80.00th=[ 7886], 90.00th=[ 9060], 95.00th=[ 9060], 00:17:27.762 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:17:27.762 | 99.99th=[ 9060] 00:17:27.762 bw ( KiB/s): min= 4096, max=126976, per=1.86%, avg=62122.67, stdev=61723.79, samples=3 00:17:27.762 iops : min= 4, max= 124, avg=60.67, stdev=60.28, samples=3 00:17:27.762 lat (msec) : 100=0.46%, 250=0.46%, 500=6.88%, 750=14.68%, 1000=15.60% 00:17:27.762 lat (msec) : 2000=38.99%, >=2000=22.94% 00:17:27.762 cpu : usr=0.00%, sys=1.15%, ctx=691, majf=0, minf=32769 00:17:27.762 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.7%, 16=7.3%, 32=14.7%, >=64=71.1% 00:17:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.762 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:17:27.762 issued rwts: total=218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.762 job4: (groupid=0, jobs=1): err= 0: pid=4177821: Wed Nov 27 12:55:51 2024 00:17:27.762 read: IOPS=16, BW=16.3MiB/s (17.1MB/s)(164MiB/10080msec) 00:17:27.762 slat (usec): min=1787, max=2119.8k, avg=60982.16, stdev=290017.29 00:17:27.762 clat (msec): min=77, max=9341, avg=1718.84, stdev=1997.22 00:17:27.762 lat (msec): min=113, max=9354, avg=1779.83, stdev=2082.51 00:17:27.762 clat percentiles (msec): 00:17:27.762 | 1.00th=[ 114], 5.00th=[ 292], 10.00th=[ 443], 20.00th=[ 684], 00:17:27.762 | 30.00th=[ 835], 40.00th=[ 986], 50.00th=[ 1200], 60.00th=[ 1552], 00:17:27.762 | 70.00th=[ 1720], 80.00th=[ 1804], 90.00th=[ 1921], 95.00th=[ 8154], 00:17:27.762 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:17:27.762 | 99.99th=[ 9329] 00:17:27.762 bw ( KiB/s): min=30720, max=30720, per=0.92%, avg=30720.00, stdev= 0.00, samples=1 00:17:27.762 iops : min= 30, max= 30, avg=30.00, stdev= 0.00, samples=1 00:17:27.762 lat (msec) : 100=0.61%, 250=2.44%, 500=8.54%, 750=12.20%, 1000=17.68% 00:17:27.762 lat (msec) : 2000=49.39%, >=2000=9.15% 00:17:27.762 cpu : usr=0.00%, sys=0.79%, ctx=694, majf=0, minf=32769 00:17:27.762 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.9%, 16=9.8%, 32=19.5%, >=64=61.6% 00:17:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.762 complete : 0=0.0%, 4=97.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.6% 00:17:27.762 issued rwts: total=164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.762 job4: (groupid=0, 
jobs=1): err= 0: pid=4177822: Wed Nov 27 12:55:51 2024 00:17:27.762 read: IOPS=20, BW=20.9MiB/s (21.9MB/s)(211MiB/10088msec) 00:17:27.762 slat (usec): min=674, max=2084.7k, avg=47397.57, stdev=242139.57 00:17:27.762 clat (msec): min=85, max=10066, avg=4896.74, stdev=3784.93 00:17:27.762 lat (msec): min=91, max=10068, avg=4944.14, stdev=3785.94 00:17:27.762 clat percentiles (msec): 00:17:27.762 | 1.00th=[ 105], 5.00th=[ 266], 10.00th=[ 447], 20.00th=[ 785], 00:17:27.762 | 30.00th=[ 1401], 40.00th=[ 1921], 50.00th=[ 4463], 60.00th=[ 8356], 00:17:27.762 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[ 8792], 95.00th=[ 8792], 00:17:27.762 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:17:27.762 | 99.99th=[10000] 00:17:27.762 bw ( KiB/s): min=28672, max=59273, per=1.29%, avg=42978.25, stdev=14629.13, samples=4 00:17:27.762 iops : min= 28, max= 57, avg=41.75, stdev=13.96, samples=4 00:17:27.762 lat (msec) : 100=0.95%, 250=3.79%, 500=6.64%, 750=8.06%, 1000=5.69% 00:17:27.762 lat (msec) : 2000=16.59%, >=2000=58.29% 00:17:27.762 cpu : usr=0.00%, sys=1.13%, ctx=884, majf=0, minf=32769 00:17:27.762 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.8%, 16=7.6%, 32=15.2%, >=64=70.1% 00:17:27.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.762 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:17:27.762 issued rwts: total=211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.762 job4: (groupid=0, jobs=1): err= 0: pid=4177823: Wed Nov 27 12:55:51 2024 00:17:27.763 read: IOPS=18, BW=18.3MiB/s (19.2MB/s)(185MiB/10110msec) 00:17:27.763 slat (usec): min=542, max=2164.1k, avg=54146.60, stdev=264421.91 00:17:27.763 clat (msec): min=91, max=9011, avg=3661.72, stdev=3557.48 00:17:27.763 lat (msec): min=161, max=9035, avg=3715.87, stdev=3577.70 00:17:27.763 clat percentiles (msec): 00:17:27.763 | 1.00th=[ 161], 5.00th=[ 257], 10.00th=[ 384], 20.00th=[ 709], 00:17:27.763 | 30.00th=[ 1116], 40.00th=[ 1418], 50.00th=[ 1804], 60.00th=[ 2056], 00:17:27.763 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8926], 00:17:27.763 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:17:27.763 | 99.99th=[ 9060] 00:17:27.763 bw ( KiB/s): min= 8192, max=57344, per=1.17%, avg=38912.00, stdev=26781.08, samples=3 00:17:27.763 iops : min= 8, max= 56, avg=38.00, stdev=26.15, samples=3 00:17:27.763 lat (msec) : 100=0.54%, 250=4.32%, 500=8.11%, 750=8.11%, 1000=6.49% 00:17:27.763 lat (msec) : 2000=29.73%, >=2000=42.70% 00:17:27.763 cpu : usr=0.01%, sys=0.97%, ctx=904, majf=0, minf=32769 00:17:27.763 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.6%, 32=17.3%, >=64=65.9% 00:17:27.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.763 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7% 00:17:27.763 issued rwts: total=185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.763 job4: (groupid=0, jobs=1): err= 0: pid=4177824: Wed Nov 27 12:55:51 2024 00:17:27.763 read: IOPS=22, BW=22.7MiB/s (23.8MB/s)(230MiB/10142msec) 00:17:27.763 slat (usec): min=484, max=2077.5k, avg=43684.19, stdev=246281.77 00:17:27.763 clat (msec): min=93, max=8716, avg=2008.36, stdev=2191.50 00:17:27.763 lat (msec): min=151, max=8719, avg=2052.05, stdev=2231.59 00:17:27.763 clat percentiles (msec): 00:17:27.763 | 1.00th=[ 226], 5.00th=[ 617], 10.00th=[ 802], 20.00th=[ 
969], 00:17:27.763 | 30.00th=[ 1053], 40.00th=[ 1116], 50.00th=[ 1217], 60.00th=[ 1284], 00:17:27.763 | 70.00th=[ 1418], 80.00th=[ 1586], 90.00th=[ 5336], 95.00th=[ 8658], 00:17:27.763 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:17:27.763 | 99.99th=[ 8658] 00:17:27.763 bw ( KiB/s): min=26624, max=104448, per=2.09%, avg=69632.00, stdev=39553.45, samples=3 00:17:27.763 iops : min= 26, max= 102, avg=68.00, stdev=38.63, samples=3 00:17:27.763 lat (msec) : 100=0.43%, 250=0.87%, 500=2.61%, 750=4.35%, 1000=16.52% 00:17:27.763 lat (msec) : 2000=59.57%, >=2000=15.65% 00:17:27.763 cpu : usr=0.02%, sys=1.10%, ctx=574, majf=0, minf=32769 00:17:27.763 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.5%, 16=7.0%, 32=13.9%, >=64=72.6% 00:17:27.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.763 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:17:27.763 issued rwts: total=230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.763 job4: (groupid=0, jobs=1): err= 0: pid=4177825: Wed Nov 27 12:55:51 2024 00:17:27.763 read: IOPS=128, BW=128MiB/s (134MB/s)(1297MiB/10129msec) 00:17:27.763 slat (usec): min=43, max=2112.3k, avg=7704.89, stdev=81543.65 00:17:27.763 clat (msec): min=126, max=4831, avg=959.27, stdev=1226.32 00:17:27.763 lat (msec): min=131, max=4837, avg=966.97, stdev=1230.88 00:17:27.763 clat percentiles (msec): 00:17:27.763 | 1.00th=[ 190], 5.00th=[ 393], 10.00th=[ 518], 20.00th=[ 523], 00:17:27.763 | 30.00th=[ 523], 40.00th=[ 527], 50.00th=[ 535], 60.00th=[ 542], 00:17:27.763 | 70.00th=[ 600], 80.00th=[ 651], 90.00th=[ 2668], 95.00th=[ 4799], 00:17:27.763 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4866], 00:17:27.763 | 99.99th=[ 4866] 00:17:27.763 bw ( KiB/s): min=30720, max=249856, per=5.53%, avg=184320.00, stdev=88202.81, samples=13 00:17:27.763 iops : min= 30, max= 244, avg=180.00, stdev=86.14, samples=13 00:17:27.763 lat (msec) : 250=2.16%, 500=4.78%, 750=81.42%, 1000=0.69%, >=2000=10.95% 00:17:27.763 cpu : usr=0.10%, sys=2.42%, ctx=1173, majf=0, minf=32769 00:17:27.763 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:17:27.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.763 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.763 issued rwts: total=1297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.763 job5: (groupid=0, jobs=1): err= 0: pid=4177826: Wed Nov 27 12:55:51 2024 00:17:27.763 read: IOPS=57, BW=57.5MiB/s (60.3MB/s)(581MiB/10108msec) 00:17:27.763 slat (usec): min=804, max=2073.3k, avg=17298.24, stdev=130868.81 00:17:27.763 clat (msec): min=53, max=6132, avg=1187.89, stdev=1169.28 00:17:27.763 lat (msec): min=110, max=6136, avg=1205.19, stdev=1187.02 00:17:27.763 clat percentiles (msec): 00:17:27.763 | 1.00th=[ 126], 5.00th=[ 477], 10.00th=[ 527], 20.00th=[ 542], 00:17:27.763 | 30.00th=[ 558], 40.00th=[ 558], 50.00th=[ 584], 60.00th=[ 911], 00:17:27.763 | 70.00th=[ 1368], 80.00th=[ 1854], 90.00th=[ 2089], 95.00th=[ 2735], 00:17:27.763 | 99.00th=[ 6141], 99.50th=[ 6141], 99.90th=[ 6141], 99.95th=[ 6141], 00:17:27.763 | 99.99th=[ 6141] 00:17:27.763 bw ( KiB/s): min=36864, max=233472, per=3.62%, avg=120832.00, stdev=85453.40, samples=7 00:17:27.763 iops : min= 36, max= 228, avg=118.00, stdev=83.45, samples=7 00:17:27.763 lat (msec) : 100=0.17%, 250=1.55%, 
500=3.61%, 750=52.15%, 1000=5.68% 00:17:27.763 lat (msec) : 2000=21.17%, >=2000=15.66% 00:17:27.763 cpu : usr=0.05%, sys=1.18%, ctx=1378, majf=0, minf=32769 00:17:27.763 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.2% 00:17:27.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.763 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:27.763 issued rwts: total=581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.763 job5: (groupid=0, jobs=1): err= 0: pid=4177827: Wed Nov 27 12:55:51 2024 00:17:27.763 read: IOPS=28, BW=28.7MiB/s (30.1MB/s)(292MiB/10161msec) 00:17:27.763 slat (usec): min=191, max=2053.4k, avg=34484.34, stdev=214112.03 00:17:27.763 clat (msec): min=89, max=8962, avg=2388.75, stdev=2835.70 00:17:27.763 lat (msec): min=165, max=8966, avg=2423.23, stdev=2858.76 00:17:27.763 clat percentiles (msec): 00:17:27.763 | 1.00th=[ 169], 5.00th=[ 232], 10.00th=[ 338], 20.00th=[ 609], 00:17:27.763 | 30.00th=[ 860], 40.00th=[ 919], 50.00th=[ 1036], 60.00th=[ 1267], 00:17:27.763 | 70.00th=[ 1401], 80.00th=[ 3540], 90.00th=[ 8658], 95.00th=[ 8792], 00:17:27.763 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:17:27.763 | 99.99th=[ 8926] 00:17:27.763 bw ( KiB/s): min=44966, max=167936, per=3.36%, avg=111927.33, stdev=62212.35, samples=3 00:17:27.763 iops : min= 43, max= 164, avg=109.00, stdev=61.25, samples=3 00:17:27.763 lat (msec) : 100=0.34%, 250=5.82%, 500=10.62%, 750=9.25%, 1000=20.89% 00:17:27.763 lat (msec) : 2000=27.05%, >=2000=26.03% 00:17:27.763 cpu : usr=0.03%, sys=1.07%, ctx=629, majf=0, minf=32769 00:17:27.763 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=11.0%, >=64=78.4% 00:17:27.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.763 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:27.763 issued rwts: total=292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.763 job5: (groupid=0, jobs=1): err= 0: pid=4177828: Wed Nov 27 12:55:51 2024 00:17:27.763 read: IOPS=28, BW=28.8MiB/s (30.2MB/s)(345MiB/11983msec) 00:17:27.763 slat (usec): min=84, max=2132.2k, avg=29086.61, stdev=202418.82 00:17:27.763 clat (msec): min=516, max=8786, avg=2231.93, stdev=2163.65 00:17:27.763 lat (msec): min=520, max=8890, avg=2261.01, stdev=2191.57 00:17:27.763 clat percentiles (msec): 00:17:27.763 | 1.00th=[ 523], 5.00th=[ 527], 10.00th=[ 531], 20.00th=[ 709], 00:17:27.763 | 30.00th=[ 995], 40.00th=[ 1200], 50.00th=[ 1485], 60.00th=[ 2198], 00:17:27.763 | 70.00th=[ 2366], 80.00th=[ 2500], 90.00th=[ 5604], 95.00th=[ 8658], 00:17:27.763 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:17:27.763 | 99.99th=[ 8792] 00:17:27.763 bw ( KiB/s): min=57344, max=190464, per=4.04%, avg=134743.33, stdev=69157.12, samples=3 00:17:27.763 iops : min= 56, max= 186, avg=131.33, stdev=67.42, samples=3 00:17:27.763 lat (msec) : 750=21.45%, 1000=8.70%, 2000=20.87%, >=2000=48.99% 00:17:27.763 cpu : usr=0.02%, sys=1.12%, ctx=520, majf=0, minf=32769 00:17:27.763 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.6%, 32=9.3%, >=64=81.7% 00:17:27.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.763 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:27.763 issued rwts: total=345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.763 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:17:27.763 job5: (groupid=0, jobs=1): err= 0: pid=4177829: Wed Nov 27 12:55:51 2024 00:17:27.763 read: IOPS=68, BW=68.8MiB/s (72.1MB/s)(692MiB/10060msec) 00:17:27.763 slat (usec): min=99, max=2114.4k, avg=14457.93, stdev=120874.88 00:17:27.763 clat (msec): min=49, max=6277, avg=852.38, stdev=773.89 00:17:27.763 lat (msec): min=83, max=6287, avg=866.84, stdev=801.36 00:17:27.763 clat percentiles (msec): 00:17:27.763 | 1.00th=[ 104], 5.00th=[ 351], 10.00th=[ 430], 20.00th=[ 464], 00:17:27.763 | 30.00th=[ 493], 40.00th=[ 510], 50.00th=[ 531], 60.00th=[ 676], 00:17:27.763 | 70.00th=[ 1045], 80.00th=[ 1267], 90.00th=[ 1401], 95.00th=[ 1536], 00:17:27.763 | 99.00th=[ 5067], 99.50th=[ 6208], 99.90th=[ 6275], 99.95th=[ 6275], 00:17:27.763 | 99.99th=[ 6275] 00:17:27.763 bw ( KiB/s): min=57344, max=296960, per=4.54%, avg=151224.29, stdev=86187.30, samples=7 00:17:27.763 iops : min= 56, max= 290, avg=147.57, stdev=84.21, samples=7 00:17:27.763 lat (msec) : 50=0.14%, 100=0.58%, 250=2.31%, 500=30.64%, 750=29.05% 00:17:27.763 lat (msec) : 1000=6.21%, 2000=28.90%, >=2000=2.17% 00:17:27.763 cpu : usr=0.05%, sys=1.43%, ctx=1407, majf=0, minf=32769 00:17:27.763 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:17:27.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.763 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:27.763 issued rwts: total=692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.763 job5: (groupid=0, jobs=1): err= 0: pid=4177830: Wed Nov 27 12:55:51 2024 00:17:27.763 read: IOPS=35, BW=35.2MiB/s (36.9MB/s)(355MiB/10092msec) 00:17:27.763 slat (usec): min=499, max=2082.7k, avg=28170.19, stdev=196215.52 00:17:27.763 clat (msec): min=88, max=8259, avg=1729.78, stdev=2149.09 00:17:27.763 lat (msec): min=95, max=8269, avg=1757.96, stdev=2176.00 00:17:27.763 clat percentiles (msec): 00:17:27.763 | 1.00th=[ 111], 5.00th=[ 279], 10.00th=[ 464], 20.00th=[ 793], 00:17:27.763 | 30.00th=[ 818], 40.00th=[ 835], 50.00th=[ 885], 60.00th=[ 936], 00:17:27.763 | 70.00th=[ 1011], 80.00th=[ 1133], 90.00th=[ 5067], 95.00th=[ 7080], 00:17:27.763 | 99.00th=[ 8221], 99.50th=[ 8288], 99.90th=[ 8288], 99.95th=[ 8288], 00:17:27.763 | 99.99th=[ 8288] 00:17:27.763 bw ( KiB/s): min=73728, max=165888, per=3.50%, avg=116736.00, stdev=39323.02, samples=4 00:17:27.763 iops : min= 72, max= 162, avg=114.00, stdev=38.40, samples=4 00:17:27.763 lat (msec) : 100=0.85%, 250=3.10%, 500=6.76%, 750=7.61%, 1000=49.01% 00:17:27.763 lat (msec) : 2000=14.08%, >=2000=18.59% 00:17:27.763 cpu : usr=0.00%, sys=1.21%, ctx=616, majf=0, minf=32769 00:17:27.763 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.0%, >=64=82.3% 00:17:27.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.763 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:27.763 issued rwts: total=355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.763 job5: (groupid=0, jobs=1): err= 0: pid=4177831: Wed Nov 27 12:55:51 2024 00:17:27.764 read: IOPS=27, BW=27.9MiB/s (29.2MB/s)(281MiB/10081msec) 00:17:27.764 slat (usec): min=35, max=2076.3k, avg=35649.13, stdev=212851.29 00:17:27.764 clat (msec): min=62, max=7830, avg=2351.24, stdev=1690.99 00:17:27.764 lat (msec): min=81, max=7838, avg=2386.89, stdev=1720.42 00:17:27.764 clat percentiles 
(msec): 00:17:27.764 | 1.00th=[ 104], 5.00th=[ 259], 10.00th=[ 489], 20.00th=[ 969], 00:17:27.764 | 30.00th=[ 1670], 40.00th=[ 1854], 50.00th=[ 1955], 60.00th=[ 2467], 00:17:27.764 | 70.00th=[ 2668], 80.00th=[ 3205], 90.00th=[ 4597], 95.00th=[ 6678], 00:17:27.764 | 99.00th=[ 7819], 99.50th=[ 7819], 99.90th=[ 7819], 99.95th=[ 7819], 00:17:27.764 | 99.99th=[ 7819] 00:17:27.764 bw ( KiB/s): min= 6144, max=106496, per=1.49%, avg=49561.60, stdev=36732.92, samples=5 00:17:27.764 iops : min= 6, max= 104, avg=48.40, stdev=35.87, samples=5 00:17:27.764 lat (msec) : 100=0.71%, 250=3.91%, 500=6.05%, 750=5.34%, 1000=4.27% 00:17:27.764 lat (msec) : 2000=33.10%, >=2000=46.62% 00:17:27.764 cpu : usr=0.01%, sys=1.00%, ctx=709, majf=0, minf=32769 00:17:27.764 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.7%, 32=11.4%, >=64=77.6% 00:17:27.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.764 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:27.764 issued rwts: total=281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.764 job5: (groupid=0, jobs=1): err= 0: pid=4177832: Wed Nov 27 12:55:51 2024 00:17:27.764 read: IOPS=127, BW=127MiB/s (133MB/s)(1287MiB/10122msec) 00:17:27.764 slat (usec): min=39, max=2091.9k, avg=7767.51, stdev=105842.21 00:17:27.764 clat (msec): min=117, max=6235, avg=548.17, stdev=1067.26 00:17:27.764 lat (msec): min=118, max=6238, avg=555.94, stdev=1080.23 00:17:27.764 clat percentiles (msec): 00:17:27.764 | 1.00th=[ 118], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 120], 00:17:27.764 | 30.00th=[ 121], 40.00th=[ 122], 50.00th=[ 184], 60.00th=[ 203], 00:17:27.764 | 70.00th=[ 226], 80.00th=[ 359], 90.00th=[ 2366], 95.00th=[ 2433], 00:17:27.764 | 99.00th=[ 6208], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:17:27.764 | 99.99th=[ 6208] 00:17:27.764 bw ( KiB/s): min=18432, max=929792, per=14.24%, avg=474903.80, stdev=359363.17, samples=5 00:17:27.764 iops : min= 18, max= 908, avg=463.60, stdev=350.88, samples=5 00:17:27.764 lat (msec) : 250=75.68%, 500=7.46%, 750=4.43%, >=2000=12.43% 00:17:27.764 cpu : usr=0.05%, sys=1.75%, ctx=1813, majf=0, minf=32769 00:17:27.764 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:17:27.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.764 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.764 issued rwts: total=1287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.764 job5: (groupid=0, jobs=1): err= 0: pid=4177833: Wed Nov 27 12:55:51 2024 00:17:27.764 read: IOPS=64, BW=64.8MiB/s (67.9MB/s)(652MiB/10066msec) 00:17:27.764 slat (usec): min=43, max=2091.0k, avg=15333.85, stdev=123229.76 00:17:27.764 clat (msec): min=63, max=6208, avg=947.05, stdev=672.92 00:17:27.764 lat (msec): min=88, max=6211, avg=962.39, stdev=703.61 00:17:27.764 clat percentiles (msec): 00:17:27.764 | 1.00th=[ 144], 5.00th=[ 609], 10.00th=[ 676], 20.00th=[ 676], 00:17:27.764 | 30.00th=[ 693], 40.00th=[ 701], 50.00th=[ 726], 60.00th=[ 760], 00:17:27.764 | 70.00th=[ 802], 80.00th=[ 1167], 90.00th=[ 1485], 95.00th=[ 1603], 00:17:27.764 | 99.00th=[ 4866], 99.50th=[ 6074], 99.90th=[ 6208], 99.95th=[ 6208], 00:17:27.764 | 99.99th=[ 6208] 00:17:27.764 bw ( KiB/s): min=67584, max=194560, per=4.36%, avg=145375.86, stdev=50245.79, samples=7 00:17:27.764 iops : min= 66, max= 190, avg=141.86, stdev=49.15, 
samples=7 00:17:27.764 lat (msec) : 100=0.31%, 250=1.23%, 500=2.45%, 750=53.99%, 1000=18.25% 00:17:27.764 lat (msec) : 2000=19.48%, >=2000=4.29% 00:17:27.764 cpu : usr=0.02%, sys=1.77%, ctx=755, majf=0, minf=32769 00:17:27.764 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:17:27.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.764 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:27.764 issued rwts: total=652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.764 job5: (groupid=0, jobs=1): err= 0: pid=4177834: Wed Nov 27 12:55:51 2024 00:17:27.764 read: IOPS=102, BW=102MiB/s (107MB/s)(1034MiB/10092msec) 00:17:27.764 slat (usec): min=42, max=2053.3k, avg=9665.11, stdev=64414.29 00:17:27.764 clat (msec): min=90, max=3330, avg=1130.28, stdev=829.78 00:17:27.764 lat (msec): min=94, max=3353, avg=1139.95, stdev=833.20 00:17:27.764 clat percentiles (msec): 00:17:27.764 | 1.00th=[ 209], 5.00th=[ 414], 10.00th=[ 506], 20.00th=[ 609], 00:17:27.764 | 30.00th=[ 693], 40.00th=[ 735], 50.00th=[ 818], 60.00th=[ 1011], 00:17:27.764 | 70.00th=[ 1167], 80.00th=[ 1334], 90.00th=[ 3138], 95.00th=[ 3239], 00:17:27.764 | 99.00th=[ 3306], 99.50th=[ 3306], 99.90th=[ 3339], 99.95th=[ 3339], 00:17:27.764 | 99.99th=[ 3339] 00:17:27.764 bw ( KiB/s): min= 2048, max=253952, per=3.98%, avg=132631.71, stdev=59453.07, samples=14 00:17:27.764 iops : min= 2, max= 248, avg=129.36, stdev=58.11, samples=14 00:17:27.764 lat (msec) : 100=0.19%, 250=1.55%, 500=5.90%, 750=34.72%, 1000=17.12% 00:17:27.764 lat (msec) : 2000=28.24%, >=2000=12.28% 00:17:27.764 cpu : usr=0.17%, sys=1.88%, ctx=1970, majf=0, minf=32769 00:17:27.764 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:17:27.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.764 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.764 issued rwts: total=1034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.764 job5: (groupid=0, jobs=1): err= 0: pid=4177835: Wed Nov 27 12:55:51 2024 00:17:27.764 read: IOPS=87, BW=87.3MiB/s (91.5MB/s)(880MiB/10081msec) 00:17:27.764 slat (usec): min=46, max=2077.5k, avg=11370.77, stdev=85210.86 00:17:27.764 clat (msec): min=70, max=4627, avg=1334.79, stdev=1043.98 00:17:27.764 lat (msec): min=135, max=4630, avg=1346.16, stdev=1049.09 00:17:27.764 clat percentiles (msec): 00:17:27.764 | 1.00th=[ 288], 5.00th=[ 584], 10.00th=[ 584], 20.00th=[ 600], 00:17:27.764 | 30.00th=[ 617], 40.00th=[ 735], 50.00th=[ 869], 60.00th=[ 1020], 00:17:27.764 | 70.00th=[ 1200], 80.00th=[ 2500], 90.00th=[ 2903], 95.00th=[ 3037], 00:17:27.764 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:17:27.764 | 99.99th=[ 4597] 00:17:27.764 bw ( KiB/s): min= 2048, max=229376, per=3.56%, avg=118580.85, stdev=79437.67, samples=13 00:17:27.764 iops : min= 2, max= 224, avg=115.69, stdev=77.55, samples=13 00:17:27.764 lat (msec) : 100=0.11%, 250=0.68%, 500=1.14%, 750=39.09%, 1000=18.30% 00:17:27.764 lat (msec) : 2000=15.80%, >=2000=24.89% 00:17:27.764 cpu : usr=0.06%, sys=1.68%, ctx=1515, majf=0, minf=32769 00:17:27.764 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.8% 00:17:27.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.764 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:17:27.764 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.764 job5: (groupid=0, jobs=1): err= 0: pid=4177836: Wed Nov 27 12:55:51 2024 00:17:27.764 read: IOPS=64, BW=64.4MiB/s (67.6MB/s)(652MiB/10118msec) 00:17:27.764 slat (usec): min=498, max=2093.3k, avg=15398.43, stdev=123676.33 00:17:27.764 clat (msec): min=73, max=6356, avg=1172.07, stdev=1382.31 00:17:27.764 lat (msec): min=118, max=6359, avg=1187.47, stdev=1396.50 00:17:27.764 clat percentiles (msec): 00:17:27.764 | 1.00th=[ 129], 5.00th=[ 363], 10.00th=[ 502], 20.00th=[ 535], 00:17:27.764 | 30.00th=[ 575], 40.00th=[ 609], 50.00th=[ 625], 60.00th=[ 852], 00:17:27.764 | 70.00th=[ 1183], 80.00th=[ 1334], 90.00th=[ 1586], 95.00th=[ 6342], 00:17:27.764 | 99.00th=[ 6342], 99.50th=[ 6342], 99.90th=[ 6342], 99.95th=[ 6342], 00:17:27.764 | 99.99th=[ 6342] 00:17:27.764 bw ( KiB/s): min=57344, max=260096, per=4.02%, avg=134144.00, stdev=67615.02, samples=8 00:17:27.764 iops : min= 56, max= 254, avg=131.00, stdev=66.03, samples=8 00:17:27.764 lat (msec) : 100=0.15%, 250=2.91%, 500=6.13%, 750=47.39%, 1000=6.90% 00:17:27.764 lat (msec) : 2000=29.75%, >=2000=6.75% 00:17:27.764 cpu : usr=0.02%, sys=1.74%, ctx=1426, majf=0, minf=32769 00:17:27.764 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:17:27.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.764 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:27.764 issued rwts: total=652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.764 job5: (groupid=0, jobs=1): err= 0: pid=4177837: Wed Nov 27 12:55:51 2024 00:17:27.764 read: IOPS=74, BW=74.2MiB/s (77.8MB/s)(749MiB/10093msec) 00:17:27.764 slat (usec): min=66, max=2078.9k, avg=13360.61, stdev=112142.29 00:17:27.764 clat (msec): min=82, max=5993, avg=1325.28, stdev=1665.76 00:17:27.764 lat (msec): min=94, max=5996, avg=1338.65, stdev=1673.81 00:17:27.764 clat percentiles (msec): 00:17:27.764 | 1.00th=[ 220], 5.00th=[ 317], 10.00th=[ 359], 20.00th=[ 456], 00:17:27.764 | 30.00th=[ 481], 40.00th=[ 489], 50.00th=[ 498], 60.00th=[ 506], 00:17:27.764 | 70.00th=[ 1053], 80.00th=[ 2072], 90.00th=[ 4866], 95.00th=[ 5873], 00:17:27.764 | 99.00th=[ 6007], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007], 00:17:27.764 | 99.99th=[ 6007] 00:17:27.764 bw ( KiB/s): min=26624, max=282624, per=4.25%, avg=141532.22, stdev=124445.10, samples=9 00:17:27.764 iops : min= 26, max= 276, avg=138.11, stdev=121.63, samples=9 00:17:27.764 lat (msec) : 100=0.40%, 250=0.67%, 500=50.87%, 750=15.49%, 1000=2.14% 00:17:27.764 lat (msec) : 2000=9.75%, >=2000=20.69% 00:17:27.764 cpu : usr=0.05%, sys=1.61%, ctx=1462, majf=0, minf=32769 00:17:27.764 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:17:27.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.764 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:27.764 issued rwts: total=749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.764 job5: (groupid=0, jobs=1): err= 0: pid=4177838: Wed Nov 27 12:55:51 2024 00:17:27.764 read: IOPS=95, BW=95.1MiB/s (99.8MB/s)(960MiB/10090msec) 00:17:27.764 slat (usec): min=73, max=2088.7k, avg=10410.99, stdev=101137.55 00:17:27.764 clat (msec): min=89, max=6009, avg=648.70, 
stdev=681.57
00:17:27.764 lat (msec): min=89, max=6061, avg=659.11, stdev=704.30
00:17:27.764 clat percentiles (msec):
00:17:27.764 | 1.00th=[ 186], 5.00th=[ 213], 10.00th=[ 245], 20.00th=[ 271],
00:17:27.764 | 30.00th=[ 300], 40.00th=[ 326], 50.00th=[ 422], 60.00th=[ 575],
00:17:27.764 | 70.00th=[ 835], 80.00th=[ 936], 90.00th=[ 1200], 95.00th=[ 1234],
00:17:27.764 | 99.00th=[ 4732], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007],
00:17:27.764 | 99.99th=[ 6007]
00:17:27.764 bw ( KiB/s): min=88064, max=439441, per=6.39%, avg=213138.12, stdev=137758.28, samples=8
00:17:27.764 iops : min= 86, max= 429, avg=208.12, stdev=134.50, samples=8
00:17:27.765 lat (msec) : 100=0.52%, 250=11.15%, 500=45.10%, 750=8.75%, 1000=17.50%
00:17:27.765 lat (msec) : 2000=14.79%, >=2000=2.19%
00:17:27.765 cpu : usr=0.03%, sys=1.59%, ctx=1977, majf=0, minf=32769
00:17:27.765 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4%
00:17:27.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:27.765 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:27.765 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:27.765 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:27.765
00:17:27.765 Run status group 0 (all jobs):
00:17:27.765 READ: bw=3256MiB/s (3414MB/s), 1829KiB/s-251MiB/s (1873kB/s-264MB/s), io=38.1GiB (41.0GB), run=10054-11996msec
00:17:27.765
00:17:27.765 Disk stats (read/write):
00:17:27.765 nvme0n1: ios=61546/0, merge=0/0, ticks=7104248/0, in_queue=7104248, util=98.43%
00:17:27.765 nvme1n1: ios=36240/0, merge=0/0, ticks=6687537/0, in_queue=6687537, util=98.34%
00:17:27.765 nvme2n1: ios=54434/0, merge=0/0, ticks=6596130/0, in_queue=6596130, util=98.43%
00:17:27.765 nvme3n1: ios=44083/0, merge=0/0, ticks=6683298/0, in_queue=6683298, util=98.47%
00:17:27.765 nvme4n1: ios=41922/0, merge=0/0, ticks=6242365/0, in_queue=6242365, util=98.88%
00:17:27.765 nvme5n1: ios=70068/0, merge=0/0, ticks=8054846/0, in_queue=8054846, util=98.99%
00:17:27.765 12:55:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync
00:17:27.765 12:55:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5
00:17:27.765 12:55:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:17:27.765 12:55:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0
00:17:27.765 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s)
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:17:27.765 12:55:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:27.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:27.765 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001
00:17:27.765 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:17:27.765 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:27.765 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001
00:17:27.765 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:27.765 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001
00:17:27.765 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:17:27.765 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:27.765 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.765 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:17:28.023 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.023 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:17:28.023 12:55:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:17:28.960 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002
00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002
00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002
00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
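The teardown that the trace above records (target/srq_overwhelm.sh lines 40-43, with waitforserial_disconnect coming from common/autotest_common.sh) reduces to one loop per subsystem: disconnect the initiator, poll lsblk until no block device reports the matching serial, then delete the subsystem over RPC. A condensed sketch reconstructed from the xtrace output, not the verbatim script; the poll interval and retry cap are assumptions:

    # Reconstructed from the trace: tear down cnode0..cnode5 in order.
    for i in $(seq 0 5); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        serial=$(printf 'SPDK%014d' "$i")          # e.g. SPDK00000000000000
        tries=0
        # waitforserial_disconnect: wait until the serial is gone from lsblk
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            sleep 1                                # poll interval: assumption
            tries=$((tries + 1))
            [ "$tries" -gt 15 ] && break           # retry cap: assumption
        done
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done

The serial-based wait matters: after nvme disconnect returns, the kernel may still be tearing the namespace down, so the script confirms via lsblk before deleting the subsystem on the target side.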
00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:28.960 12:55:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:17:29.898 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:29.898 12:55:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:17:30.835 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- 
# return 0 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:30.835 12:55:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:17:31.792 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:31.792 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:32.052 rmmod nvme_rdma 00:17:32.052 rmmod nvme_fabrics 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:32.052 12:55:58 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 4176150 ']' 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 4176150 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 4176150 ']' 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 4176150 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4176150 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4176150' 00:17:32.052 killing process with pid 4176150 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 4176150 00:17:32.052 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 4176150 00:17:32.311 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:32.311 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:32.311 00:17:32.311 real 0m36.180s 00:17:32.311 user 1m59.763s 00:17:32.311 sys 0m19.224s 00:17:32.311 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.311 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:32.311 ************************************ 00:17:32.311 END TEST nvmf_srq_overwhelm 00:17:32.311 ************************************ 00:17:32.311 12:55:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:17:32.311 12:55:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:32.311 12:55:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.311 12:55:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:32.571 ************************************ 00:17:32.571 START TEST nvmf_shutdown 00:17:32.571 ************************************ 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:17:32.571 * Looking for test storage... 
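Every target test in this log ends with the killprocess sequence traced above (common/autotest_common.sh lines 954-978): confirm the pid was passed and is still alive, read the process name, refuse to kill a bare sudo wrapper, then kill and reap. A sketch of that flow as it can be reconstructed from the xtrace; the exact return codes are assumptions:

    # Sketch of the killprocess flow visible in the trace (pid 4176150 above).
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no pid supplied
        kill -0 "$pid" || return 1           # process must still exist
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1   # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap so the exit status is recorded
    }

Here the process name resolves to reactor_0, the SPDK reactor thread, so the nvmf target (pid 4176150) is killed directly and the module unload in nvmftestfini that follows can remove nvme-rdma safely.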
00:17:32.571 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:17:32.571 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:32.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.572 --rc genhtml_branch_coverage=1 00:17:32.572 --rc genhtml_function_coverage=1 00:17:32.572 --rc genhtml_legend=1 00:17:32.572 --rc geninfo_all_blocks=1 00:17:32.572 --rc geninfo_unexecuted_blocks=1 00:17:32.572 00:17:32.572 ' 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:32.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.572 --rc genhtml_branch_coverage=1 00:17:32.572 --rc genhtml_function_coverage=1 00:17:32.572 --rc genhtml_legend=1 00:17:32.572 --rc geninfo_all_blocks=1 00:17:32.572 --rc geninfo_unexecuted_blocks=1 00:17:32.572 00:17:32.572 ' 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:32.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.572 --rc genhtml_branch_coverage=1 00:17:32.572 --rc genhtml_function_coverage=1 00:17:32.572 --rc genhtml_legend=1 00:17:32.572 --rc geninfo_all_blocks=1 00:17:32.572 --rc geninfo_unexecuted_blocks=1 00:17:32.572 00:17:32.572 ' 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:32.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.572 --rc genhtml_branch_coverage=1 00:17:32.572 --rc genhtml_function_coverage=1 00:17:32.572 --rc genhtml_legend=1 00:17:32.572 --rc geninfo_all_blocks=1 00:17:32.572 --rc geninfo_unexecuted_blocks=1 00:17:32.572 00:17:32.572 ' 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:32.572 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.572 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.573 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.573 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:32.573 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:32.573 12:55:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:17:32.573 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:32.573 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.573 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:32.833 ************************************ 00:17:32.833 START TEST nvmf_shutdown_tc1 00:17:32.833 ************************************ 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:17:32.833 12:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:40.963 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:40.964 12:56:07 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:40.964 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:40.964 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:d9:00.0: mlx_0_0' 00:17:40.964 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:40.964 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
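The trace above (nvmf/common.sh@410-@429 plus rdma_device_init) does two things: it maps each mlx5 PCI function that survived the vendor:device classification to its kernel netdev through sysfs, and it loads the IB/RDMA module stack. A condensed sketch of both steps; the PCI address is the first one discovered in this run, and the rest follows the traced commands directly:

pci=0000:d9:00.0                                   # first mlx5 function found above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
echo "Found net devices under $pci: ${pci_net_devs[*]}"

# load_ib_rdma_modules (nvmf/common.sh@66-@72) is an ordered modprobe chain:
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done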
00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.964 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:40.965 12:56:07 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:40.965 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:40.965 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:40.965 altname enp217s0f0np0 00:17:40.965 altname ens818f0np0 00:17:40.965 inet 192.168.100.8/24 scope global mlx_0_0 00:17:40.965 valid_lft forever preferred_lft forever 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:40.965 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:40.965 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:40.965 altname enp217s0f1np1 00:17:40.965 altname ens818f1np1 00:17:40.965 inet 192.168.100.9/24 scope global mlx_0_1 00:17:40.965 valid_lft forever preferred_lft forever 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.965 
12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:40.965 192.168.100.9' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:40.965 192.168.100.9' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:17:40.965 12:56:07 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:40.965 192.168.100.9' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:40.965 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:41.225 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:41.225 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:41.226 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.226 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:41.226 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=4184893 00:17:41.226 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:41.226 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 4184893 00:17:41.226 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 4184893 ']' 00:17:41.226 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.226 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.226 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.226 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.226 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:41.226 [2024-11-27 12:56:07.411078] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
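The get_ip_address helper traced at nvmf/common.sh@116-@117 is a one-liner over `ip -o -4`, and the two target addresses are then peeled off RDMA_IP_LIST with head/tail exactly as shown at common.sh@485-@486. A minimal reconstruction, using the interface names and addresses from this run:

get_ip_address() {
    # first IPv4 address on the interface, with the /24 prefix length stripped
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 here
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 here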
00:17:41.226 [2024-11-27 12:56:07.411138] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.226 [2024-11-27 12:56:07.501213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:41.226 [2024-11-27 12:56:07.539669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.226 [2024-11-27 12:56:07.539716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.226 [2024-11-27 12:56:07.539730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.226 [2024-11-27 12:56:07.539738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.226 [2024-11-27 12:56:07.539745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.226 [2024-11-27 12:56:07.541409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.226 [2024-11-27 12:56:07.541510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:41.226 [2024-11-27 12:56:07.541821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.226 [2024-11-27 12:56:07.541821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:41.485 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.485 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:17:41.485 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.485 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:41.485 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:41.485 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.485 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:41.485 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.485 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:41.486 [2024-11-27 12:56:07.719285] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe590f0/0xe5d5e0) succeed. 00:17:41.486 [2024-11-27 12:56:07.728416] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe5a780/0xe9ec80) succeed. 
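nvmfappstart, whose trace ends above with the two create_ib_device notices, launches the target on cores 1-4 and blocks until the RPC socket answers, after which the RDMA transport is created. A rough sketch; the polling loop is an assumption about what waitforlisten does internally, not a copy of it:

"$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &   # -m 0x1E = cores 1-4, -e 0xFFFF = all tracepoint groups
nvmfpid=$!
# waitforlisten (assumed shape): poll until the app serves RPCs on /var/tmp/spdk.sock
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done
# then, exactly as traced at shutdown.sh@21:
"$rootdir/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192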
00:17:41.486 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.486 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:41.486 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:41.486 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.486 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:41.486 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:41.486 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.486 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:41.745 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.745 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:41.745 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.746 12:56:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:41.746 Malloc1 00:17:41.746 [2024-11-27 12:56:07.965820] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:41.746 Malloc2 00:17:41.746 Malloc3 00:17:41.746 Malloc4 00:17:41.746 Malloc5 00:17:42.005 Malloc6 00:17:42.005 Malloc7 00:17:42.005 Malloc8 00:17:42.005 Malloc9 00:17:42.005 Malloc10 00:17:42.005 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.005 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:42.005 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:42.005 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=4185205 00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 4185205 /var/tmp/bdevperf.sock 00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 4185205 ']' 00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
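Each of the ten `cat` iterations above appends one batch of RPCs to rpcs.txt, which rpc_cmd then replays against the target (shutdown.sh@36); that replay is what produces the Malloc1..Malloc10 bdevs and the listener notice in this trace. The file's contents are not echoed by xtrace, so the batch below is an assumption reconstructed from the bdev and subsystem names visible in this log (the 64 MiB / 512 B malloc geometry is illustrative):

for i in {1..10}; do
    {
        echo "bdev_malloc_create -b Malloc$i 64 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420"
    } >> rpcs.txt       # the harness builds these lines with cat <<-EOF
done
rpc_cmd < rpcs.txt      # shutdown.sh@36; stdin redirections are not shown by xtrace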
00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:17:42.265 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:42.266 { 00:17:42.266 "params": { 00:17:42.266 "name": "Nvme$subsystem", 00:17:42.266 "trtype": "$TEST_TRANSPORT", 00:17:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.266 "adrfam": "ipv4", 00:17:42.266 "trsvcid": "$NVMF_PORT", 00:17:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.266 "hdgst": ${hdgst:-false}, 00:17:42.266 "ddgst": ${ddgst:-false} 00:17:42.266 }, 00:17:42.266 "method": "bdev_nvme_attach_controller" 00:17:42.266 } 00:17:42.266 EOF 00:17:42.266 )") 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:42.266 { 00:17:42.266 "params": { 00:17:42.266 "name": "Nvme$subsystem", 00:17:42.266 "trtype": "$TEST_TRANSPORT", 00:17:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.266 "adrfam": "ipv4", 00:17:42.266 "trsvcid": "$NVMF_PORT", 00:17:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.266 "hdgst": ${hdgst:-false}, 00:17:42.266 "ddgst": ${ddgst:-false} 00:17:42.266 }, 00:17:42.266 "method": "bdev_nvme_attach_controller" 00:17:42.266 } 00:17:42.266 EOF 00:17:42.266 )") 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:42.266 { 00:17:42.266 "params": { 00:17:42.266 "name": "Nvme$subsystem", 00:17:42.266 "trtype": "$TEST_TRANSPORT", 00:17:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.266 "adrfam": "ipv4", 00:17:42.266 "trsvcid": "$NVMF_PORT", 00:17:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.266 "hdgst": ${hdgst:-false}, 00:17:42.266 "ddgst": ${ddgst:-false} 00:17:42.266 }, 00:17:42.266 "method": "bdev_nvme_attach_controller" 00:17:42.266 } 00:17:42.266 EOF 00:17:42.266 )") 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:42.266 { 00:17:42.266 "params": { 00:17:42.266 "name": "Nvme$subsystem", 00:17:42.266 "trtype": "$TEST_TRANSPORT", 00:17:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.266 "adrfam": "ipv4", 00:17:42.266 "trsvcid": "$NVMF_PORT", 00:17:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.266 "hdgst": ${hdgst:-false}, 00:17:42.266 "ddgst": ${ddgst:-false} 00:17:42.266 }, 00:17:42.266 "method": "bdev_nvme_attach_controller" 00:17:42.266 } 00:17:42.266 EOF 00:17:42.266 )") 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:42.266 { 00:17:42.266 "params": { 00:17:42.266 "name": "Nvme$subsystem", 00:17:42.266 "trtype": "$TEST_TRANSPORT", 00:17:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.266 "adrfam": "ipv4", 00:17:42.266 "trsvcid": "$NVMF_PORT", 00:17:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.266 "hdgst": ${hdgst:-false}, 00:17:42.266 "ddgst": ${ddgst:-false} 00:17:42.266 }, 00:17:42.266 "method": "bdev_nvme_attach_controller" 00:17:42.266 } 00:17:42.266 EOF 00:17:42.266 )") 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:42.266 { 00:17:42.266 "params": { 00:17:42.266 "name": "Nvme$subsystem", 00:17:42.266 "trtype": "$TEST_TRANSPORT", 00:17:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.266 "adrfam": "ipv4", 00:17:42.266 "trsvcid": "$NVMF_PORT", 00:17:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.266 "hdgst": ${hdgst:-false}, 00:17:42.266 "ddgst": ${ddgst:-false} 00:17:42.266 }, 00:17:42.266 "method": "bdev_nvme_attach_controller" 00:17:42.266 } 00:17:42.266 EOF 00:17:42.266 )") 00:17:42.266 [2024-11-27 12:56:08.457758] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
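The heredoc fragments being accumulated above are gen_nvmf_target_json at work: one bdev_nvme_attach_controller stanza per subsystem id, comma-joined (the jq/IFS steps at nvmf/common.sh@584-@586) and handed to bdev_svc as --json /dev/fd/63 via process substitution. Condensed from the traced commands, with the wrapping of the fragments into a full subsystem-config object elided:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # the comma-joined list printed later in the trace
}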
00:17:42.266 [2024-11-27 12:56:08.457810] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:42.266 { 00:17:42.266 "params": { 00:17:42.266 "name": "Nvme$subsystem", 00:17:42.266 "trtype": "$TEST_TRANSPORT", 00:17:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.266 "adrfam": "ipv4", 00:17:42.266 "trsvcid": "$NVMF_PORT", 00:17:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.266 "hdgst": ${hdgst:-false}, 00:17:42.266 "ddgst": ${ddgst:-false} 00:17:42.266 }, 00:17:42.266 "method": "bdev_nvme_attach_controller" 00:17:42.266 } 00:17:42.266 EOF 00:17:42.266 )") 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:42.266 { 00:17:42.266 "params": { 00:17:42.266 "name": "Nvme$subsystem", 00:17:42.266 "trtype": "$TEST_TRANSPORT", 00:17:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.266 "adrfam": "ipv4", 00:17:42.266 "trsvcid": "$NVMF_PORT", 00:17:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.266 "hdgst": ${hdgst:-false}, 00:17:42.266 "ddgst": ${ddgst:-false} 00:17:42.266 }, 00:17:42.266 "method": "bdev_nvme_attach_controller" 00:17:42.266 } 00:17:42.266 EOF 00:17:42.266 )") 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:42.266 { 00:17:42.266 "params": { 00:17:42.266 "name": "Nvme$subsystem", 00:17:42.266 "trtype": "$TEST_TRANSPORT", 00:17:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.266 "adrfam": "ipv4", 00:17:42.266 "trsvcid": "$NVMF_PORT", 00:17:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.266 "hdgst": ${hdgst:-false}, 00:17:42.266 "ddgst": ${ddgst:-false} 00:17:42.266 }, 00:17:42.266 "method": "bdev_nvme_attach_controller" 00:17:42.266 } 00:17:42.266 EOF 00:17:42.266 )") 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:42.266 { 00:17:42.266 "params": { 00:17:42.266 "name": "Nvme$subsystem", 
00:17:42.266 "trtype": "$TEST_TRANSPORT", 00:17:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.266 "adrfam": "ipv4", 00:17:42.266 "trsvcid": "$NVMF_PORT", 00:17:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.266 "hdgst": ${hdgst:-false}, 00:17:42.266 "ddgst": ${ddgst:-false} 00:17:42.266 }, 00:17:42.266 "method": "bdev_nvme_attach_controller" 00:17:42.266 } 00:17:42.266 EOF 00:17:42.266 )") 00:17:42.266 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:42.267 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:17:42.267 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:17:42.267 12:56:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:42.267 "params": { 00:17:42.267 "name": "Nvme1", 00:17:42.267 "trtype": "rdma", 00:17:42.267 "traddr": "192.168.100.8", 00:17:42.267 "adrfam": "ipv4", 00:17:42.267 "trsvcid": "4420", 00:17:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.267 "hdgst": false, 00:17:42.267 "ddgst": false 00:17:42.267 }, 00:17:42.267 "method": "bdev_nvme_attach_controller" 00:17:42.267 },{ 00:17:42.267 "params": { 00:17:42.267 "name": "Nvme2", 00:17:42.267 "trtype": "rdma", 00:17:42.267 "traddr": "192.168.100.8", 00:17:42.267 "adrfam": "ipv4", 00:17:42.267 "trsvcid": "4420", 00:17:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:42.267 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:42.267 "hdgst": false, 00:17:42.267 "ddgst": false 00:17:42.267 }, 00:17:42.267 "method": "bdev_nvme_attach_controller" 00:17:42.267 },{ 00:17:42.267 "params": { 00:17:42.267 "name": "Nvme3", 00:17:42.267 "trtype": "rdma", 00:17:42.267 "traddr": "192.168.100.8", 00:17:42.267 "adrfam": "ipv4", 00:17:42.267 "trsvcid": "4420", 00:17:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:42.267 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:42.267 "hdgst": false, 00:17:42.267 "ddgst": false 00:17:42.267 }, 00:17:42.267 "method": "bdev_nvme_attach_controller" 00:17:42.267 },{ 00:17:42.267 "params": { 00:17:42.267 "name": "Nvme4", 00:17:42.267 "trtype": "rdma", 00:17:42.267 "traddr": "192.168.100.8", 00:17:42.267 "adrfam": "ipv4", 00:17:42.267 "trsvcid": "4420", 00:17:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:42.267 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:42.267 "hdgst": false, 00:17:42.267 "ddgst": false 00:17:42.267 }, 00:17:42.267 "method": "bdev_nvme_attach_controller" 00:17:42.267 },{ 00:17:42.267 "params": { 00:17:42.267 "name": "Nvme5", 00:17:42.267 "trtype": "rdma", 00:17:42.267 "traddr": "192.168.100.8", 00:17:42.267 "adrfam": "ipv4", 00:17:42.267 "trsvcid": "4420", 00:17:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:42.267 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:42.267 "hdgst": false, 00:17:42.267 "ddgst": false 00:17:42.267 }, 00:17:42.267 "method": "bdev_nvme_attach_controller" 00:17:42.267 },{ 00:17:42.267 "params": { 00:17:42.267 "name": "Nvme6", 00:17:42.267 "trtype": "rdma", 00:17:42.267 "traddr": "192.168.100.8", 00:17:42.267 "adrfam": "ipv4", 00:17:42.267 "trsvcid": "4420", 00:17:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:42.267 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:42.267 "hdgst": false, 00:17:42.267 "ddgst": false 00:17:42.267 }, 00:17:42.267 "method": 
"bdev_nvme_attach_controller" 00:17:42.267 },{ 00:17:42.267 "params": { 00:17:42.267 "name": "Nvme7", 00:17:42.267 "trtype": "rdma", 00:17:42.267 "traddr": "192.168.100.8", 00:17:42.267 "adrfam": "ipv4", 00:17:42.267 "trsvcid": "4420", 00:17:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:42.267 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:42.267 "hdgst": false, 00:17:42.267 "ddgst": false 00:17:42.267 }, 00:17:42.267 "method": "bdev_nvme_attach_controller" 00:17:42.267 },{ 00:17:42.267 "params": { 00:17:42.267 "name": "Nvme8", 00:17:42.267 "trtype": "rdma", 00:17:42.267 "traddr": "192.168.100.8", 00:17:42.267 "adrfam": "ipv4", 00:17:42.267 "trsvcid": "4420", 00:17:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:42.267 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:42.267 "hdgst": false, 00:17:42.267 "ddgst": false 00:17:42.267 }, 00:17:42.267 "method": "bdev_nvme_attach_controller" 00:17:42.267 },{ 00:17:42.267 "params": { 00:17:42.267 "name": "Nvme9", 00:17:42.267 "trtype": "rdma", 00:17:42.267 "traddr": "192.168.100.8", 00:17:42.267 "adrfam": "ipv4", 00:17:42.267 "trsvcid": "4420", 00:17:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:42.267 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:42.267 "hdgst": false, 00:17:42.267 "ddgst": false 00:17:42.267 }, 00:17:42.267 "method": "bdev_nvme_attach_controller" 00:17:42.267 },{ 00:17:42.267 "params": { 00:17:42.267 "name": "Nvme10", 00:17:42.267 "trtype": "rdma", 00:17:42.267 "traddr": "192.168.100.8", 00:17:42.267 "adrfam": "ipv4", 00:17:42.267 "trsvcid": "4420", 00:17:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:42.267 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:42.267 "hdgst": false, 00:17:42.267 "ddgst": false 00:17:42.267 }, 00:17:42.267 "method": "bdev_nvme_attach_controller" 00:17:42.267 }' 00:17:42.267 [2024-11-27 12:56:08.550902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.267 [2024-11-27 12:56:08.590213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.205 12:56:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.205 12:56:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:17:43.205 12:56:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:43.205 12:56:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.205 12:56:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:43.205 12:56:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.205 12:56:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 4185205 00:17:43.205 12:56:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:17:43.205 12:56:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:17:44.144 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 4185205 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@89 -- # kill -0 4184893 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:44.144 { 00:17:44.144 "params": { 00:17:44.144 "name": "Nvme$subsystem", 00:17:44.144 "trtype": "$TEST_TRANSPORT", 00:17:44.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.144 "adrfam": "ipv4", 00:17:44.144 "trsvcid": "$NVMF_PORT", 00:17:44.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.144 "hdgst": ${hdgst:-false}, 00:17:44.144 "ddgst": ${ddgst:-false} 00:17:44.144 }, 00:17:44.144 "method": "bdev_nvme_attach_controller" 00:17:44.144 } 00:17:44.144 EOF 00:17:44.144 )") 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:44.144 { 00:17:44.144 "params": { 00:17:44.144 "name": "Nvme$subsystem", 00:17:44.144 "trtype": "$TEST_TRANSPORT", 00:17:44.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.144 "adrfam": "ipv4", 00:17:44.144 "trsvcid": "$NVMF_PORT", 00:17:44.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.144 "hdgst": ${hdgst:-false}, 00:17:44.144 "ddgst": ${ddgst:-false} 00:17:44.144 }, 00:17:44.144 "method": "bdev_nvme_attach_controller" 00:17:44.144 } 00:17:44.144 EOF 00:17:44.144 )") 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:44.144 { 00:17:44.144 "params": { 00:17:44.144 "name": "Nvme$subsystem", 00:17:44.144 "trtype": "$TEST_TRANSPORT", 00:17:44.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.144 "adrfam": "ipv4", 00:17:44.144 "trsvcid": "$NVMF_PORT", 00:17:44.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.144 "hdgst": ${hdgst:-false}, 00:17:44.144 "ddgst": ${ddgst:-false} 00:17:44.144 }, 00:17:44.144 "method": "bdev_nvme_attach_controller" 00:17:44.144 } 00:17:44.144 EOF 00:17:44.144 )") 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:44.144 12:56:10 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:44.144 { 00:17:44.144 "params": { 00:17:44.144 "name": "Nvme$subsystem", 00:17:44.144 "trtype": "$TEST_TRANSPORT", 00:17:44.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.144 "adrfam": "ipv4", 00:17:44.144 "trsvcid": "$NVMF_PORT", 00:17:44.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.144 "hdgst": ${hdgst:-false}, 00:17:44.144 "ddgst": ${ddgst:-false} 00:17:44.144 }, 00:17:44.144 "method": "bdev_nvme_attach_controller" 00:17:44.144 } 00:17:44.144 EOF 00:17:44.144 )") 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:44.144 [2024-11-27 12:56:10.493453] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:17:44.144 [2024-11-27 12:56:10.493508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4185507 ] 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:44.144 { 00:17:44.144 "params": { 00:17:44.144 "name": "Nvme$subsystem", 00:17:44.144 "trtype": "$TEST_TRANSPORT", 00:17:44.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.144 "adrfam": "ipv4", 00:17:44.144 "trsvcid": "$NVMF_PORT", 00:17:44.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.144 "hdgst": ${hdgst:-false}, 00:17:44.144 "ddgst": ${ddgst:-false} 00:17:44.144 }, 00:17:44.144 "method": "bdev_nvme_attach_controller" 00:17:44.144 } 00:17:44.144 EOF 00:17:44.144 )") 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:44.144 { 00:17:44.144 "params": { 00:17:44.144 "name": "Nvme$subsystem", 00:17:44.144 "trtype": "$TEST_TRANSPORT", 00:17:44.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.144 "adrfam": "ipv4", 00:17:44.144 "trsvcid": "$NVMF_PORT", 00:17:44.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.144 "hdgst": ${hdgst:-false}, 00:17:44.144 "ddgst": ${ddgst:-false} 00:17:44.144 }, 00:17:44.144 "method": "bdev_nvme_attach_controller" 00:17:44.144 } 00:17:44.144 EOF 00:17:44.144 )") 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:44.144 { 00:17:44.144 "params": { 00:17:44.144 "name": 
"Nvme$subsystem", 00:17:44.144 "trtype": "$TEST_TRANSPORT", 00:17:44.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.144 "adrfam": "ipv4", 00:17:44.144 "trsvcid": "$NVMF_PORT", 00:17:44.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.144 "hdgst": ${hdgst:-false}, 00:17:44.144 "ddgst": ${ddgst:-false} 00:17:44.144 }, 00:17:44.144 "method": "bdev_nvme_attach_controller" 00:17:44.144 } 00:17:44.144 EOF 00:17:44.144 )") 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:44.144 { 00:17:44.144 "params": { 00:17:44.144 "name": "Nvme$subsystem", 00:17:44.144 "trtype": "$TEST_TRANSPORT", 00:17:44.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.144 "adrfam": "ipv4", 00:17:44.144 "trsvcid": "$NVMF_PORT", 00:17:44.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.144 "hdgst": ${hdgst:-false}, 00:17:44.144 "ddgst": ${ddgst:-false} 00:17:44.144 }, 00:17:44.144 "method": "bdev_nvme_attach_controller" 00:17:44.144 } 00:17:44.144 EOF 00:17:44.144 )") 00:17:44.144 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:44.404 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:44.404 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:44.404 { 00:17:44.404 "params": { 00:17:44.404 "name": "Nvme$subsystem", 00:17:44.404 "trtype": "$TEST_TRANSPORT", 00:17:44.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.404 "adrfam": "ipv4", 00:17:44.404 "trsvcid": "$NVMF_PORT", 00:17:44.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.404 "hdgst": ${hdgst:-false}, 00:17:44.404 "ddgst": ${ddgst:-false} 00:17:44.404 }, 00:17:44.404 "method": "bdev_nvme_attach_controller" 00:17:44.404 } 00:17:44.404 EOF 00:17:44.404 )") 00:17:44.404 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:44.404 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:44.404 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:44.404 { 00:17:44.404 "params": { 00:17:44.404 "name": "Nvme$subsystem", 00:17:44.404 "trtype": "$TEST_TRANSPORT", 00:17:44.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.404 "adrfam": "ipv4", 00:17:44.404 "trsvcid": "$NVMF_PORT", 00:17:44.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.404 "hdgst": ${hdgst:-false}, 00:17:44.404 "ddgst": ${ddgst:-false} 00:17:44.404 }, 00:17:44.404 "method": "bdev_nvme_attach_controller" 00:17:44.404 } 00:17:44.404 EOF 00:17:44.404 )") 00:17:44.404 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:44.404 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:17:44.404 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:17:44.404 12:56:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:44.404 "params": { 00:17:44.404 "name": "Nvme1", 00:17:44.404 "trtype": "rdma", 00:17:44.404 "traddr": "192.168.100.8", 00:17:44.404 "adrfam": "ipv4", 00:17:44.404 "trsvcid": "4420", 00:17:44.404 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.404 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:44.404 "hdgst": false, 00:17:44.404 "ddgst": false 00:17:44.404 }, 00:17:44.404 "method": "bdev_nvme_attach_controller" 00:17:44.404 },{ 00:17:44.404 "params": { 00:17:44.404 "name": "Nvme2", 00:17:44.404 "trtype": "rdma", 00:17:44.404 "traddr": "192.168.100.8", 00:17:44.404 "adrfam": "ipv4", 00:17:44.404 "trsvcid": "4420", 00:17:44.404 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:44.404 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:44.404 "hdgst": false, 00:17:44.404 "ddgst": false 00:17:44.404 }, 00:17:44.404 "method": "bdev_nvme_attach_controller" 00:17:44.404 },{ 00:17:44.404 "params": { 00:17:44.404 "name": "Nvme3", 00:17:44.404 "trtype": "rdma", 00:17:44.404 "traddr": "192.168.100.8", 00:17:44.404 "adrfam": "ipv4", 00:17:44.404 "trsvcid": "4420", 00:17:44.404 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:44.404 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:44.404 "hdgst": false, 00:17:44.404 "ddgst": false 00:17:44.404 }, 00:17:44.404 "method": "bdev_nvme_attach_controller" 00:17:44.404 },{ 00:17:44.405 "params": { 00:17:44.405 "name": "Nvme4", 00:17:44.405 "trtype": "rdma", 00:17:44.405 "traddr": "192.168.100.8", 00:17:44.405 "adrfam": "ipv4", 00:17:44.405 "trsvcid": "4420", 00:17:44.405 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:44.405 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:44.405 "hdgst": false, 00:17:44.405 "ddgst": false 00:17:44.405 }, 00:17:44.405 "method": "bdev_nvme_attach_controller" 00:17:44.405 },{ 00:17:44.405 "params": { 00:17:44.405 "name": "Nvme5", 00:17:44.405 "trtype": "rdma", 00:17:44.405 "traddr": "192.168.100.8", 00:17:44.405 "adrfam": "ipv4", 00:17:44.405 "trsvcid": "4420", 00:17:44.405 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:44.405 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:44.405 "hdgst": false, 00:17:44.405 "ddgst": false 00:17:44.405 }, 00:17:44.405 "method": "bdev_nvme_attach_controller" 00:17:44.405 },{ 00:17:44.405 "params": { 00:17:44.405 "name": "Nvme6", 00:17:44.405 "trtype": "rdma", 00:17:44.405 "traddr": "192.168.100.8", 00:17:44.405 "adrfam": "ipv4", 00:17:44.405 "trsvcid": "4420", 00:17:44.405 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:44.405 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:44.405 "hdgst": false, 00:17:44.405 "ddgst": false 00:17:44.405 }, 00:17:44.405 "method": "bdev_nvme_attach_controller" 00:17:44.405 },{ 00:17:44.405 "params": { 00:17:44.405 "name": "Nvme7", 00:17:44.405 "trtype": "rdma", 00:17:44.405 "traddr": "192.168.100.8", 00:17:44.405 "adrfam": "ipv4", 00:17:44.405 "trsvcid": "4420", 00:17:44.405 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:44.405 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:44.405 "hdgst": false, 00:17:44.405 "ddgst": false 00:17:44.405 }, 00:17:44.405 "method": "bdev_nvme_attach_controller" 00:17:44.405 },{ 00:17:44.405 "params": { 00:17:44.405 "name": "Nvme8", 00:17:44.405 "trtype": "rdma", 00:17:44.405 "traddr": "192.168.100.8", 00:17:44.405 "adrfam": "ipv4", 00:17:44.405 "trsvcid": "4420", 00:17:44.405 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:17:44.405 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:44.405 "hdgst": false, 00:17:44.405 "ddgst": false 00:17:44.405 }, 00:17:44.405 "method": "bdev_nvme_attach_controller" 00:17:44.405 },{ 00:17:44.405 "params": { 00:17:44.405 "name": "Nvme9", 00:17:44.405 "trtype": "rdma", 00:17:44.405 "traddr": "192.168.100.8", 00:17:44.405 "adrfam": "ipv4", 00:17:44.405 "trsvcid": "4420", 00:17:44.405 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:44.405 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:44.405 "hdgst": false, 00:17:44.405 "ddgst": false 00:17:44.405 }, 00:17:44.405 "method": "bdev_nvme_attach_controller" 00:17:44.405 },{ 00:17:44.405 "params": { 00:17:44.405 "name": "Nvme10", 00:17:44.405 "trtype": "rdma", 00:17:44.405 "traddr": "192.168.100.8", 00:17:44.405 "adrfam": "ipv4", 00:17:44.405 "trsvcid": "4420", 00:17:44.405 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:44.405 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:44.405 "hdgst": false, 00:17:44.405 "ddgst": false 00:17:44.405 }, 00:17:44.405 "method": "bdev_nvme_attach_controller" 00:17:44.405 }' 00:17:44.405 [2024-11-27 12:56:10.586580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.405 [2024-11-27 12:56:10.625972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.342 Running I/O for 1 seconds... 00:17:46.539 3367.00 IOPS, 210.44 MiB/s 00:17:46.539 Latency(us) 00:17:46.539 [2024-11-27T11:56:12.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.539 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:46.539 Verification LBA range: start 0x0 length 0x400 00:17:46.539 Nvme1n1 : 1.17 355.70 22.23 0.00 0.00 176373.76 20237.52 221459.25 00:17:46.539 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:46.539 Verification LBA range: start 0x0 length 0x400 00:17:46.539 Nvme2n1 : 1.17 349.40 21.84 0.00 0.00 176695.70 20342.37 209715.20 00:17:46.539 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:46.539 Verification LBA range: start 0x0 length 0x400 00:17:46.539 Nvme3n1 : 1.17 382.33 23.90 0.00 0.00 159623.46 6815.74 144284.06 00:17:46.539 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:46.539 Verification LBA range: start 0x0 length 0x400 00:17:46.539 Nvme4n1 : 1.17 381.93 23.87 0.00 0.00 157648.84 11796.48 137573.17 00:17:46.539 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:46.539 Verification LBA range: start 0x0 length 0x400 00:17:46.539 Nvme5n1 : 1.17 381.45 23.84 0.00 0.00 156424.60 19293.80 126667.98 00:17:46.539 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:46.539 Verification LBA range: start 0x0 length 0x400 00:17:46.539 Nvme6n1 : 1.18 383.62 23.98 0.00 0.00 152667.23 4666.16 119957.09 00:17:46.539 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:46.539 Verification LBA range: start 0x0 length 0x400 00:17:46.539 Nvme7n1 : 1.18 399.32 24.96 0.00 0.00 145709.19 4639.95 110729.63 00:17:46.539 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:46.539 Verification LBA range: start 0x0 length 0x400 00:17:46.539 Nvme8n1 : 1.18 403.17 25.20 0.00 0.00 142340.15 4823.45 103179.88 00:17:46.539 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:46.539 Verification LBA range: start 0x0 length 0x400 00:17:46.539 Nvme9n1 : 1.18 379.92 23.75 0.00 0.00 149232.84 10590.62 98146.71 00:17:46.539 
Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:46.539 Verification LBA range: start 0x0 length 0x400 00:17:46.539 Nvme10n1 : 1.18 379.39 23.71 0.00 0.00 147137.33 11586.76 114085.07 00:17:46.539 [2024-11-27T11:56:12.924Z] =================================================================================================================== 00:17:46.539 [2024-11-27T11:56:12.924Z] Total : 3796.24 237.27 0.00 0.00 155913.55 4639.95 221459.25 00:17:46.798 12:56:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:17:46.798 12:56:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:17:46.798 12:56:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:46.798 12:56:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:46.798 12:56:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:17:46.799 12:56:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:46.799 12:56:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:17:46.799 12:56:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:46.799 12:56:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:46.799 12:56:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:17:46.799 12:56:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:46.799 12:56:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:46.799 rmmod nvme_rdma 00:17:46.799 rmmod nvme_fabrics 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 4184893 ']' 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 4184893 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 4184893 ']' 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 4184893 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4184893 00:17:46.799 12:56:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4184893' 00:17:46.799 killing process with pid 4184893 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 4184893 00:17:46.799 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 4184893 00:17:47.368 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:47.368 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:47.368 00:17:47.368 real 0m14.571s 00:17:47.368 user 0m29.063s 00:17:47.368 sys 0m7.479s 00:17:47.368 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.368 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:47.368 ************************************ 00:17:47.368 END TEST nvmf_shutdown_tc1 00:17:47.368 ************************************ 00:17:47.368 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:47.369 ************************************ 00:17:47.369 START TEST nvmf_shutdown_tc2 00:17:47.369 ************************************ 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:47.369 12:56:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:47.369 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:47.369 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:47.369 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:47.369 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:17:47.369 
12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:47.369 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:47.370 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:47.370 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:47.370 altname enp217s0f0np0 00:17:47.370 altname ens818f0np0 00:17:47.370 inet 192.168.100.8/24 scope global mlx_0_0 00:17:47.370 valid_lft forever preferred_lft forever 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:47.370 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:47.370 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:47.370 altname enp217s0f1np1 00:17:47.370 altname ens818f1np1 00:17:47.370 inet 192.168.100.9/24 scope global mlx_0_1 00:17:47.370 valid_lft forever preferred_lft forever 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:17:47.370 12:56:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:47.370 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:47.630 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:47.631 12:56:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:47.631 192.168.100.9' 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:47.631 192.168.100.9' 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:47.631 192.168.100.9' 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4186190 00:17:47.631 12:56:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4186190 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4186190 ']' 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:47.631 12:56:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:47.631 [2024-11-27 12:56:13.878569] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:17:47.631 [2024-11-27 12:56:13.878624] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.631 [2024-11-27 12:56:13.968566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.631 [2024-11-27 12:56:14.009877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.631 [2024-11-27 12:56:14.009918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.631 [2024-11-27 12:56:14.009928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.631 [2024-11-27 12:56:14.009936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.631 [2024-11-27 12:56:14.009943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
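[Editor's aside] The app_setup_trace NOTICE lines above are the target's standing offer of runtime tracing (the 0xFFFF tracepoint group mask was enabled via -e on its command line). Capturing a snapshot looks like the following; the first command is quoted verbatim from the notice, and only the destination path in the second is illustrative:

# Snapshot the nvmf tracepoints of app instance 0, as the NOTICE suggests.
spdk_trace -s nvmf -i 0
# Or keep the raw shared-memory trace file for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved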
00:17:47.631 [2024-11-27 12:56:14.011583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.631 [2024-11-27 12:56:14.011676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.631 [2024-11-27 12:56:14.011796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.631 [2024-11-27 12:56:14.011798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 [2024-11-27 12:56:14.777677] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5a00f0/0x5a45e0) succeed. 00:17:48.569 [2024-11-27 12:56:14.787049] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5a1780/0x5e5c80) succeed. 
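[Editor's aside] With both IB devices up, the transport created by the rpc_cmd trace above can equally be issued straight against the target's RPC socket; a sketch, using the spdk.sock path the target advertised at startup:

# Create the RDMA transport with 1024 shared buffers and an 8 KiB I/O unit,
# mirroring 'rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192'.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192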
00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:48.569 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:48.829 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:48.829 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:48.829 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:48.829 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:48.829 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:48.829 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:48.829 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:48.829 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:48.829 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:17:48.829 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.829 12:56:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:48.829 Malloc1 00:17:48.829 [2024-11-27 12:56:15.019941] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:48.829 Malloc2 00:17:48.829 Malloc3 00:17:48.829 Malloc4 00:17:48.829 Malloc5 00:17:49.088 Malloc6 00:17:49.088 Malloc7 00:17:49.089 Malloc8 00:17:49.089 Malloc9 00:17:49.089 Malloc10 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=4186520 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 4186520 /var/tmp/bdevperf.sock 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4186520 ']' 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
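[Editor's aside] The --json /dev/fd/63 in the bdevperf command line traced above is the visible half of a bash process substitution: the output of gen_nvmf_target_json, whose assembly is traced on the following lines, reaches bdevperf as a pseudo-file. A sketch of the equivalent invocation from the SPDK repo root:

# Run a 10-second verify workload (queue depth 64, 64 KiB I/O) against the ten
# generated controllers; the <(...) is what appears as /dev/fd/63 in the trace.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10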
00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.089 { 00:17:49.089 "params": { 00:17:49.089 "name": "Nvme$subsystem", 00:17:49.089 "trtype": "$TEST_TRANSPORT", 00:17:49.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.089 "adrfam": "ipv4", 00:17:49.089 "trsvcid": "$NVMF_PORT", 00:17:49.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.089 "hdgst": ${hdgst:-false}, 00:17:49.089 "ddgst": ${ddgst:-false} 00:17:49.089 }, 00:17:49.089 "method": "bdev_nvme_attach_controller" 00:17:49.089 } 00:17:49.089 EOF 00:17:49.089 )") 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.089 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.089 { 00:17:49.089 "params": { 00:17:49.089 "name": "Nvme$subsystem", 00:17:49.089 "trtype": "$TEST_TRANSPORT", 00:17:49.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.089 "adrfam": "ipv4", 00:17:49.089 "trsvcid": "$NVMF_PORT", 00:17:49.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.089 "hdgst": ${hdgst:-false}, 00:17:49.089 "ddgst": ${ddgst:-false} 00:17:49.089 }, 00:17:49.089 "method": "bdev_nvme_attach_controller" 00:17:49.089 } 00:17:49.089 EOF 00:17:49.089 )") 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.349 { 00:17:49.349 "params": { 00:17:49.349 "name": "Nvme$subsystem", 00:17:49.349 "trtype": "$TEST_TRANSPORT", 00:17:49.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.349 "adrfam": "ipv4", 00:17:49.349 "trsvcid": "$NVMF_PORT", 00:17:49.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.349 "hdgst": ${hdgst:-false}, 00:17:49.349 "ddgst": ${ddgst:-false} 00:17:49.349 }, 00:17:49.349 "method": "bdev_nvme_attach_controller" 00:17:49.349 } 00:17:49.349 EOF 00:17:49.349 )") 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.349 { 00:17:49.349 "params": { 00:17:49.349 "name": "Nvme$subsystem", 00:17:49.349 "trtype": "$TEST_TRANSPORT", 00:17:49.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.349 "adrfam": "ipv4", 00:17:49.349 "trsvcid": "$NVMF_PORT", 00:17:49.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.349 "hdgst": ${hdgst:-false}, 00:17:49.349 "ddgst": ${ddgst:-false} 00:17:49.349 }, 00:17:49.349 "method": "bdev_nvme_attach_controller" 00:17:49.349 } 00:17:49.349 EOF 00:17:49.349 )") 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.349 { 00:17:49.349 "params": { 00:17:49.349 "name": "Nvme$subsystem", 00:17:49.349 "trtype": "$TEST_TRANSPORT", 00:17:49.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.349 "adrfam": "ipv4", 00:17:49.349 "trsvcid": "$NVMF_PORT", 00:17:49.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.349 "hdgst": ${hdgst:-false}, 00:17:49.349 "ddgst": ${ddgst:-false} 00:17:49.349 }, 00:17:49.349 "method": "bdev_nvme_attach_controller" 00:17:49.349 } 00:17:49.349 EOF 00:17:49.349 )") 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:49.349 [2024-11-27 12:56:15.503038] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:17:49.349 [2024-11-27 12:56:15.503091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4186520 ] 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.349 { 00:17:49.349 "params": { 00:17:49.349 "name": "Nvme$subsystem", 00:17:49.349 "trtype": "$TEST_TRANSPORT", 00:17:49.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.349 "adrfam": "ipv4", 00:17:49.349 "trsvcid": "$NVMF_PORT", 00:17:49.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.349 "hdgst": ${hdgst:-false}, 00:17:49.349 "ddgst": ${ddgst:-false} 00:17:49.349 }, 00:17:49.349 "method": "bdev_nvme_attach_controller" 00:17:49.349 } 00:17:49.349 EOF 00:17:49.349 )") 00:17:49.349 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.350 { 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme$subsystem", 00:17:49.350 "trtype": "$TEST_TRANSPORT", 00:17:49.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "$NVMF_PORT", 00:17:49.350 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.350 "hdgst": ${hdgst:-false}, 00:17:49.350 "ddgst": ${ddgst:-false} 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 } 00:17:49.350 EOF 00:17:49.350 )") 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.350 { 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme$subsystem", 00:17:49.350 "trtype": "$TEST_TRANSPORT", 00:17:49.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "$NVMF_PORT", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.350 "hdgst": ${hdgst:-false}, 00:17:49.350 "ddgst": ${ddgst:-false} 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 } 00:17:49.350 EOF 00:17:49.350 )") 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.350 { 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme$subsystem", 00:17:49.350 "trtype": "$TEST_TRANSPORT", 00:17:49.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "$NVMF_PORT", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.350 "hdgst": ${hdgst:-false}, 00:17:49.350 "ddgst": ${ddgst:-false} 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 } 00:17:49.350 EOF 00:17:49.350 )") 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.350 { 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme$subsystem", 00:17:49.350 "trtype": "$TEST_TRANSPORT", 00:17:49.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "$NVMF_PORT", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.350 "hdgst": ${hdgst:-false}, 00:17:49.350 "ddgst": ${ddgst:-false} 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 } 00:17:49.350 EOF 00:17:49.350 )") 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:17:49.350 12:56:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme1", 00:17:49.350 "trtype": "rdma", 00:17:49.350 "traddr": "192.168.100.8", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "4420", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.350 "hdgst": false, 00:17:49.350 "ddgst": false 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 },{ 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme2", 00:17:49.350 "trtype": "rdma", 00:17:49.350 "traddr": "192.168.100.8", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "4420", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:49.350 "hdgst": false, 00:17:49.350 "ddgst": false 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 },{ 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme3", 00:17:49.350 "trtype": "rdma", 00:17:49.350 "traddr": "192.168.100.8", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "4420", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:49.350 "hdgst": false, 00:17:49.350 "ddgst": false 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 },{ 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme4", 00:17:49.350 "trtype": "rdma", 00:17:49.350 "traddr": "192.168.100.8", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "4420", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:49.350 "hdgst": false, 00:17:49.350 "ddgst": false 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 },{ 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme5", 00:17:49.350 "trtype": "rdma", 00:17:49.350 "traddr": "192.168.100.8", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "4420", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:49.350 "hdgst": false, 00:17:49.350 "ddgst": false 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 },{ 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme6", 00:17:49.350 "trtype": "rdma", 00:17:49.350 "traddr": "192.168.100.8", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "4420", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:49.350 "hdgst": false, 00:17:49.350 "ddgst": false 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 },{ 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme7", 00:17:49.350 "trtype": "rdma", 00:17:49.350 "traddr": "192.168.100.8", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "4420", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:49.350 "hdgst": false, 00:17:49.350 "ddgst": false 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 },{ 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme8", 00:17:49.350 "trtype": "rdma", 00:17:49.350 "traddr": "192.168.100.8", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "4420", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:49.350 "hdgst": false, 00:17:49.350 "ddgst": false 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 },{ 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme9", 00:17:49.350 "trtype": "rdma", 00:17:49.350 "traddr": "192.168.100.8", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "4420", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:49.350 "hdgst": false, 00:17:49.350 "ddgst": false 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 },{ 00:17:49.350 "params": { 00:17:49.350 "name": "Nvme10", 00:17:49.350 "trtype": "rdma", 00:17:49.350 "traddr": "192.168.100.8", 00:17:49.350 "adrfam": "ipv4", 00:17:49.350 "trsvcid": "4420", 00:17:49.350 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:49.350 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:49.350 "hdgst": false, 00:17:49.350 "ddgst": false 00:17:49.350 }, 00:17:49.350 "method": "bdev_nvme_attach_controller" 00:17:49.350 }' 00:17:49.350 [2024-11-27 12:56:15.594482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.350 [2024-11-27 12:56:15.634021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.289 Running I/O for 10 seconds... 00:17:50.289 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.289 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:17:50.289 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:50.289 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.289 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:50.548 12:56:16 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:17:50.548 12:56:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=147 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 147 -ge 100 ']' 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 4186520 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 4186520 ']' 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 4186520 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.806 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4186520 00:17:51.064 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.064 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.064 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4186520' 00:17:51.064 killing process with pid 4186520 00:17:51.064 
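For context, the polling above is shutdown.sh's waitforio: it queries bdevperf's RPC socket for Nvme1n1's read-op count up to ten times, 0.25 s apart, and succeeds once at least 100 reads have completed. Here the first sample returned 3 and the second 147, so the loop broke out and the test moved on to killing the bdevperf process. A reduced sketch of that loop, where rpc.py stands in for the harness's rpc_cmd wrapper around SPDK's scripts/rpc.py:

#!/usr/bin/env bash
# Poll a bdevperf RPC socket until a bdev shows real read traffic.
# Socket path and bdev name mirror the trace; adjust for other setups.
sock=/var/tmp/bdevperf.sock
bdev=Nvme1n1
ret=1

for ((i = 10; i != 0; i--)); do
    # bdev_get_iostat returns per-bdev counters; extract the read-op count.
    read_io_count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0 # enough I/O completed; the target is demonstrably serving reads
        break
    fi
    sleep 0.25
done
exit $ret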
12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 4186520
00:17:51.064 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 4186520
00:17:51.064 Received shutdown signal, test time was about 0.829748 seconds
00:17:51.064
00:17:51.064 Latency(us)
00:17:51.064 [2024-11-27T11:56:17.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:51.064 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:51.064 Verification LBA range: start 0x0 length 0x400
00:17:51.064 Nvme1n1 : 0.81 363.22 22.70 0.00 0.00 172336.30 6710.89 234881.02
00:17:51.064 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:51.064 Verification LBA range: start 0x0 length 0x400
00:17:51.064 Nvme2n1 : 0.82 392.08 24.51 0.00 0.00 156577.96 5767.17 163577.86
00:17:51.064 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:51.064 Verification LBA range: start 0x0 length 0x400
00:17:51.064 Nvme3n1 : 0.82 391.51 24.47 0.00 0.00 153774.98 8441.04 156028.11
00:17:51.064 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:51.064 Verification LBA range: start 0x0 length 0x400
00:17:51.064 Nvme4n1 : 0.82 390.93 24.43 0.00 0.00 150994.12 8703.18 150156.08
00:17:51.064 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:51.064 Verification LBA range: start 0x0 length 0x400
00:17:51.064 Nvme5n1 : 0.82 390.23 24.39 0.00 0.00 148653.83 9279.90 139250.89
00:17:51.064 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:51.064 Verification LBA range: start 0x0 length 0x400
00:17:51.064 Nvme6n1 : 0.82 389.66 24.35 0.00 0.00 145373.02 9699.33 131701.15
00:17:51.064 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:51.064 Verification LBA range: start 0x0 length 0x400
00:17:51.064 Nvme7n1 : 0.82 389.11 24.32 0.00 0.00 142498.37 9909.04 124990.26
00:17:51.064 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:51.064 Verification LBA range: start 0x0 length 0x400
00:17:51.064 Nvme8n1 : 0.82 388.53 24.28 0.00 0.00 139797.63 10276.04 116601.65
00:17:51.064 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:51.064 Verification LBA range: start 0x0 length 0x400
00:17:51.064 Nvme9n1 : 0.83 387.74 24.23 0.00 0.00 137912.81 11062.48 102341.02
00:17:51.064 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:51.064 Verification LBA range: start 0x0 length 0x400
00:17:51.064 Nvme10n1 : 0.83 308.76 19.30 0.00 0.00 168949.20 3040.87 234881.02
00:17:51.064 [2024-11-27T11:56:17.449Z] ===================================================================================================================
00:17:51.064 [2024-11-27T11:56:17.449Z] Total : 3791.79 236.99 0.00 0.00 151172.56 3040.87 234881.02
00:17:51.323 12:56:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:17:52.258 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 4186190
00:17:52.258 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:17:52.258 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:17:52.258 12:56:18
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:52.258 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:52.258 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:17:52.258 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:52.258 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:17:52.258 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:52.258 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:52.258 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:17:52.258 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:52.258 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:52.258 rmmod nvme_rdma 00:17:52.517 rmmod nvme_fabrics 00:17:52.517 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:52.517 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:17:52.517 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:17:52.517 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 4186190 ']' 00:17:52.517 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 4186190 00:17:52.517 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 4186190 ']' 00:17:52.517 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 4186190 00:17:52.517 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:17:52.518 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.518 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4186190 00:17:52.518 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:52.518 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:52.518 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4186190' 00:17:52.518 killing process with pid 4186190 00:17:52.518 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 4186190 00:17:52.518 12:56:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 4186190 00:17:53.087 12:56:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:53.087 00:17:53.087 real 0m5.557s 00:17:53.087 user 0m22.538s 00:17:53.087 sys 0m1.225s 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 ************************************ 00:17:53.087 END TEST nvmf_shutdown_tc2 00:17:53.087 ************************************ 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 ************************************ 00:17:53.087 START TEST nvmf_shutdown_tc3 00:17:53.087 ************************************ 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.087 12:56:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.087 
12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:53.087 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:53.087 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:53.087 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.087 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:53.087 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:53.088 12:56:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:53.088 12:56:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:53.088 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:53.088 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:53.088 altname enp217s0f0np0 00:17:53.088 altname ens818f0np0 00:17:53.088 inet 192.168.100.8/24 scope global mlx_0_0 00:17:53.088 valid_lft forever preferred_lft forever 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:53.088 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:53.088 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:53.088 altname enp217s0f1np1 00:17:53.088 altname ens818f1np1 00:17:53.088 inet 192.168.100.9/24 scope global mlx_0_1 00:17:53.088 valid_lft forever preferred_lft forever 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 
-- # ip -o -4 addr show mlx_0_1 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:53.088 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:53.347 192.168.100.9' 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:53.347 192.168.100.9' 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:53.347 192.168.100.9' 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=4187374 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 4187374 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 4187374 ']' 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.347 12:56:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.347 12:56:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:53.347 [2024-11-27 12:56:19.573669] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:17:53.347 [2024-11-27 12:56:19.573730] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.347 [2024-11-27 12:56:19.662969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.347 [2024-11-27 12:56:19.702390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.347 [2024-11-27 12:56:19.702430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.347 [2024-11-27 12:56:19.702439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.347 [2024-11-27 12:56:19.702447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.347 [2024-11-27 12:56:19.702470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.347 [2024-11-27 12:56:19.704301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.347 [2024-11-27 12:56:19.704383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.348 [2024-11-27 12:56:19.704496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.348 [2024-11-27 12:56:19.704497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:54.281 [2024-11-27 12:56:20.494084] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x6f30f0/0x6f75e0) succeed. 00:17:54.281 [2024-11-27 12:56:20.503145] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6f4780/0x738c80) succeed. 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:54.281 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:54.539 12:56:20 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.539 12:56:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:54.539 Malloc1 00:17:54.539 [2024-11-27 12:56:20.745208] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:54.539 Malloc2 00:17:54.539 Malloc3 00:17:54.539 Malloc4 00:17:54.539 Malloc5 00:17:54.797 Malloc6 00:17:54.797 Malloc7 00:17:54.797 Malloc8 00:17:54.797 Malloc9 00:17:54.797 Malloc10 00:17:54.797 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.797 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:54.797 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.797 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=4187687 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 4187687 /var/tmp/bdevperf.sock 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 4187687 ']' 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
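The @125 trace line shows how tc3 launches bdevperf: the generated target JSON arrives over a process substitution (which the kernel exposes as the /dev/fd/63 seen above) and the workload is a 10-second verify run at queue depth 64 with 64 KiB I/Os. A hedged reconstruction of that invocation, using the stanza generator sketched earlier:

# Hedged reconstruction of the @125 invocation above. The process substitution
# <(...) is what the trace reports as --json /dev/fd/63; gen_nvmf_target_json
# is the stanza generator sketched earlier in this log.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

-q sets the queue depth, -o the I/O size in bytes, -w the workload and -t the run time in seconds, matching the 64-deep, 64 KiB, 10-second verify run whose per-bdev results appeared in the tc2 latency table above.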
00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:55.057 { 00:17:55.057 "params": { 00:17:55.057 "name": "Nvme$subsystem", 00:17:55.057 "trtype": "$TEST_TRANSPORT", 00:17:55.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.057 "adrfam": "ipv4", 00:17:55.057 "trsvcid": "$NVMF_PORT", 00:17:55.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.057 "hdgst": ${hdgst:-false}, 00:17:55.057 "ddgst": ${ddgst:-false} 00:17:55.057 }, 00:17:55.057 "method": "bdev_nvme_attach_controller" 00:17:55.057 } 00:17:55.057 EOF 00:17:55.057 )") 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:55.057 { 00:17:55.057 "params": { 00:17:55.057 "name": "Nvme$subsystem", 00:17:55.057 "trtype": "$TEST_TRANSPORT", 00:17:55.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.057 "adrfam": "ipv4", 00:17:55.057 "trsvcid": "$NVMF_PORT", 00:17:55.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.057 "hdgst": ${hdgst:-false}, 00:17:55.057 "ddgst": ${ddgst:-false} 00:17:55.057 }, 00:17:55.057 "method": "bdev_nvme_attach_controller" 00:17:55.057 } 00:17:55.057 EOF 00:17:55.057 )") 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:55.057 { 00:17:55.057 "params": { 00:17:55.057 "name": "Nvme$subsystem", 00:17:55.057 "trtype": "$TEST_TRANSPORT", 00:17:55.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.057 "adrfam": "ipv4", 00:17:55.057 "trsvcid": "$NVMF_PORT", 00:17:55.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.057 "hdgst": ${hdgst:-false}, 00:17:55.057 "ddgst": ${ddgst:-false} 00:17:55.057 }, 00:17:55.057 "method": "bdev_nvme_attach_controller" 00:17:55.057 } 00:17:55.057 EOF 00:17:55.057 )") 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:55.057 12:56:21 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:55.057 { 00:17:55.057 "params": { 00:17:55.057 "name": "Nvme$subsystem", 00:17:55.057 "trtype": "$TEST_TRANSPORT", 00:17:55.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.057 "adrfam": "ipv4", 00:17:55.057 "trsvcid": "$NVMF_PORT", 00:17:55.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.057 "hdgst": ${hdgst:-false}, 00:17:55.057 "ddgst": ${ddgst:-false} 00:17:55.057 }, 00:17:55.057 "method": "bdev_nvme_attach_controller" 00:17:55.057 } 00:17:55.057 EOF 00:17:55.057 )") 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:55.057 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:55.057 { 00:17:55.057 "params": { 00:17:55.057 "name": "Nvme$subsystem", 00:17:55.057 "trtype": "$TEST_TRANSPORT", 00:17:55.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.057 "adrfam": "ipv4", 00:17:55.057 "trsvcid": "$NVMF_PORT", 00:17:55.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.057 "hdgst": ${hdgst:-false}, 00:17:55.057 "ddgst": ${ddgst:-false} 00:17:55.057 }, 00:17:55.057 "method": "bdev_nvme_attach_controller" 00:17:55.057 } 00:17:55.058 EOF 00:17:55.058 )") 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:55.058 { 00:17:55.058 "params": { 00:17:55.058 "name": "Nvme$subsystem", 00:17:55.058 "trtype": "$TEST_TRANSPORT", 00:17:55.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.058 "adrfam": "ipv4", 00:17:55.058 "trsvcid": "$NVMF_PORT", 00:17:55.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.058 "hdgst": ${hdgst:-false}, 00:17:55.058 "ddgst": ${ddgst:-false} 00:17:55.058 }, 00:17:55.058 "method": "bdev_nvme_attach_controller" 00:17:55.058 } 00:17:55.058 EOF 00:17:55.058 )") 00:17:55.058 [2024-11-27 12:56:21.243334] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
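Between the startup banner above and the EAL parameter line below, the harness's waitforlisten helper (traced at @839-@842 earlier) blocks until the new app answers on its RPC socket. A simplified sketch of that wait, assuming SPDK's scripts/rpc.py is on PATH; the real autotest_common.sh helper also verifies the target PID stays alive between retries:

# Block until an SPDK app answers RPCs on its UNIX socket, or give up.
waitforlisten_sketch() {
    local sock=${1:-/var/tmp/bdevperf.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 0; i < max_retries; i++)); do
        # rpc_get_methods only succeeds once the app's RPC server is
        # accepting connections on the socket.
        if rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}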
00:17:55.058 [2024-11-27 12:56:21.243385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4187687 ] 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:55.058 { 00:17:55.058 "params": { 00:17:55.058 "name": "Nvme$subsystem", 00:17:55.058 "trtype": "$TEST_TRANSPORT", 00:17:55.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.058 "adrfam": "ipv4", 00:17:55.058 "trsvcid": "$NVMF_PORT", 00:17:55.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.058 "hdgst": ${hdgst:-false}, 00:17:55.058 "ddgst": ${ddgst:-false} 00:17:55.058 }, 00:17:55.058 "method": "bdev_nvme_attach_controller" 00:17:55.058 } 00:17:55.058 EOF 00:17:55.058 )") 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:55.058 { 00:17:55.058 "params": { 00:17:55.058 "name": "Nvme$subsystem", 00:17:55.058 "trtype": "$TEST_TRANSPORT", 00:17:55.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.058 "adrfam": "ipv4", 00:17:55.058 "trsvcid": "$NVMF_PORT", 00:17:55.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.058 "hdgst": ${hdgst:-false}, 00:17:55.058 "ddgst": ${ddgst:-false} 00:17:55.058 }, 00:17:55.058 "method": "bdev_nvme_attach_controller" 00:17:55.058 } 00:17:55.058 EOF 00:17:55.058 )") 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:55.058 { 00:17:55.058 "params": { 00:17:55.058 "name": "Nvme$subsystem", 00:17:55.058 "trtype": "$TEST_TRANSPORT", 00:17:55.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.058 "adrfam": "ipv4", 00:17:55.058 "trsvcid": "$NVMF_PORT", 00:17:55.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.058 "hdgst": ${hdgst:-false}, 00:17:55.058 "ddgst": ${ddgst:-false} 00:17:55.058 }, 00:17:55.058 "method": "bdev_nvme_attach_controller" 00:17:55.058 } 00:17:55.058 EOF 00:17:55.058 )") 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:55.058 { 00:17:55.058 "params": { 00:17:55.058 "name": 
"Nvme$subsystem", 00:17:55.058 "trtype": "$TEST_TRANSPORT", 00:17:55.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:55.058 "adrfam": "ipv4", 00:17:55.058 "trsvcid": "$NVMF_PORT", 00:17:55.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:55.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:55.058 "hdgst": ${hdgst:-false}, 00:17:55.058 "ddgst": ${ddgst:-false} 00:17:55.058 }, 00:17:55.058 "method": "bdev_nvme_attach_controller" 00:17:55.058 } 00:17:55.058 EOF 00:17:55.058 )") 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:17:55.058 12:56:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:55.058 "params": { 00:17:55.058 "name": "Nvme1", 00:17:55.058 "trtype": "rdma", 00:17:55.058 "traddr": "192.168.100.8", 00:17:55.058 "adrfam": "ipv4", 00:17:55.058 "trsvcid": "4420", 00:17:55.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.058 "hdgst": false, 00:17:55.058 "ddgst": false 00:17:55.058 }, 00:17:55.058 "method": "bdev_nvme_attach_controller" 00:17:55.058 },{ 00:17:55.058 "params": { 00:17:55.058 "name": "Nvme2", 00:17:55.058 "trtype": "rdma", 00:17:55.058 "traddr": "192.168.100.8", 00:17:55.058 "adrfam": "ipv4", 00:17:55.058 "trsvcid": "4420", 00:17:55.058 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:55.058 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:55.058 "hdgst": false, 00:17:55.058 "ddgst": false 00:17:55.058 }, 00:17:55.058 "method": "bdev_nvme_attach_controller" 00:17:55.058 },{ 00:17:55.059 "params": { 00:17:55.059 "name": "Nvme3", 00:17:55.059 "trtype": "rdma", 00:17:55.059 "traddr": "192.168.100.8", 00:17:55.059 "adrfam": "ipv4", 00:17:55.059 "trsvcid": "4420", 00:17:55.059 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:55.059 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:55.059 "hdgst": false, 00:17:55.059 "ddgst": false 00:17:55.059 }, 00:17:55.059 "method": "bdev_nvme_attach_controller" 00:17:55.059 },{ 00:17:55.059 "params": { 00:17:55.059 "name": "Nvme4", 00:17:55.059 "trtype": "rdma", 00:17:55.059 "traddr": "192.168.100.8", 00:17:55.059 "adrfam": "ipv4", 00:17:55.059 "trsvcid": "4420", 00:17:55.059 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:55.059 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:55.059 "hdgst": false, 00:17:55.059 "ddgst": false 00:17:55.059 }, 00:17:55.059 "method": "bdev_nvme_attach_controller" 00:17:55.059 },{ 00:17:55.059 "params": { 00:17:55.059 "name": "Nvme5", 00:17:55.059 "trtype": "rdma", 00:17:55.059 "traddr": "192.168.100.8", 00:17:55.059 "adrfam": "ipv4", 00:17:55.059 "trsvcid": "4420", 00:17:55.059 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:55.059 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:55.059 "hdgst": false, 00:17:55.059 "ddgst": false 00:17:55.059 }, 00:17:55.059 "method": "bdev_nvme_attach_controller" 00:17:55.059 },{ 00:17:55.059 "params": { 00:17:55.059 "name": "Nvme6", 00:17:55.059 "trtype": "rdma", 00:17:55.059 "traddr": "192.168.100.8", 00:17:55.059 "adrfam": "ipv4", 00:17:55.059 "trsvcid": "4420", 00:17:55.059 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:55.059 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:55.059 "hdgst": false, 00:17:55.059 "ddgst": false 00:17:55.059 }, 00:17:55.059 "method": 
"bdev_nvme_attach_controller" 00:17:55.059 },{ 00:17:55.059 "params": { 00:17:55.059 "name": "Nvme7", 00:17:55.059 "trtype": "rdma", 00:17:55.059 "traddr": "192.168.100.8", 00:17:55.059 "adrfam": "ipv4", 00:17:55.059 "trsvcid": "4420", 00:17:55.059 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:55.059 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:55.059 "hdgst": false, 00:17:55.059 "ddgst": false 00:17:55.059 }, 00:17:55.059 "method": "bdev_nvme_attach_controller" 00:17:55.059 },{ 00:17:55.059 "params": { 00:17:55.059 "name": "Nvme8", 00:17:55.059 "trtype": "rdma", 00:17:55.059 "traddr": "192.168.100.8", 00:17:55.059 "adrfam": "ipv4", 00:17:55.059 "trsvcid": "4420", 00:17:55.059 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:55.059 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:55.059 "hdgst": false, 00:17:55.059 "ddgst": false 00:17:55.059 }, 00:17:55.059 "method": "bdev_nvme_attach_controller" 00:17:55.059 },{ 00:17:55.059 "params": { 00:17:55.059 "name": "Nvme9", 00:17:55.059 "trtype": "rdma", 00:17:55.059 "traddr": "192.168.100.8", 00:17:55.059 "adrfam": "ipv4", 00:17:55.059 "trsvcid": "4420", 00:17:55.059 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:55.059 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:55.059 "hdgst": false, 00:17:55.059 "ddgst": false 00:17:55.059 }, 00:17:55.059 "method": "bdev_nvme_attach_controller" 00:17:55.059 },{ 00:17:55.059 "params": { 00:17:55.059 "name": "Nvme10", 00:17:55.059 "trtype": "rdma", 00:17:55.059 "traddr": "192.168.100.8", 00:17:55.059 "adrfam": "ipv4", 00:17:55.059 "trsvcid": "4420", 00:17:55.059 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:55.059 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:55.059 "hdgst": false, 00:17:55.059 "ddgst": false 00:17:55.059 }, 00:17:55.059 "method": "bdev_nvme_attach_controller" 00:17:55.059 }' 00:17:55.059 [2024-11-27 12:56:21.334441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.059 [2024-11-27 12:56:21.374242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.994 Running I/O for 10 seconds... 
00:17:55.994 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.994 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:17:55.994 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:55.994 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.994 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=20 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 20 -ge 100 ']' 00:17:56.253 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:17:56.512 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:17:56.512 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:56.512 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:56.512 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:56.512 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.512 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:56.771 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.771 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=172 00:17:56.771 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 172 -ge 100 ']' 00:17:56.771 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:17:56.771 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:17:56.771 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:17:56.771 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 4187374 00:17:56.771 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 4187374 ']' 00:17:56.771 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 4187374 00:17:56.771 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:17:56.771 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.771 12:56:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4187374 00:17:56.771 12:56:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:56.771 12:56:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:56.771 12:56:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4187374' 00:17:56.771 killing process with pid 4187374 00:17:56.771 12:56:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 4187374 00:17:56.771 12:56:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 4187374 00:17:57.288 2701.00 IOPS, 168.81 MiB/s [2024-11-27T11:56:23.673Z] 12:56:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:17:57.860 [2024-11-27 12:56:24.059015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.860 [2024-11-27 12:56:24.059057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.860 [2024-11-27 12:56:24.059069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.059078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
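[Editor's note] The trace above is shutdown.sh's waitforio helper: it polls bdevperf's RPC socket until the bdev has completed at least 100 reads (20 ops on the first pass, 172 on the second, hence ret=0 and break). A minimal reconstruction, assuming only what the xtrace shows; rpc_cmd is the suite's wrapper around SPDK's rpc.py:

    # Sketch of waitforio (target/shutdown.sh) as traced above.
    waitforio() {
        local sock=$1 bdev=$2
        local ret=1 i read_io_count

        [[ -n $sock && -n $bdev ]] || return 1

        # Poll up to 10 times, 0.25 s apart: once num_read_ops on the
        # bdev reaches 100, I/O is considered flowing and we stop.
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [[ $read_io_count -ge 100 ]]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

Once waitforio returns 0, the test kills the nvmf target process (killprocess 4187374 above: a kill -0 liveness check, a ps --no-headers -o comm= name check, then kill and wait), which is what triggers the controller failures that follow.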
dnr:0 00:17:57.861 [2024-11-27 12:56:24.059087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.059095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.059104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.059113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.061789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:57.861 [2024-11-27 12:56:24.061838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:17:57.861 [2024-11-27 12:56:24.061896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.061929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.061962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.061992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.062024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.062055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.062086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.062116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.064736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:57.861 [2024-11-27 12:56:24.064779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:17:57.861 [2024-11-27 12:56:24.064833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.064866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.064898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.064928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.064959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.064990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.065021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.065051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.067419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:57.861 [2024-11-27 12:56:24.067460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:17:57.861 [2024-11-27 12:56:24.067496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.067510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.067522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.067542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.067555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.067567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.067580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.067592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.069921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:57.861 [2024-11-27 12:56:24.069963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:17:57.861 [2024-11-27 12:56:24.070025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.070039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.070052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.070065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.070077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.070089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.070102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.070114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.072461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:57.861 [2024-11-27 12:56:24.072501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:17:57.861 [2024-11-27 12:56:24.072549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.072581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.072627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.072658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.072689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.072718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.072750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.072779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.075337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:57.861 [2024-11-27 12:56:24.075378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:17:57.861 [2024-11-27 12:56:24.075430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.075462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.075495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.075524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.075555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.075585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.075627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.075658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.078229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:57.861 [2024-11-27 12:56:24.078269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:57.861 [2024-11-27 12:56:24.078318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.078351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.078383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.078412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.078443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.078473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.078505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.861 [2024-11-27 12:56:24.078534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0 00:17:57.861 [2024-11-27 12:56:24.081033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:57.862 [2024-11-27 12:56:24.081073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
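[Editor's note] The cascade above is the expected consequence of killing the target mid-I/O: each of the host's controllers (cnode1 through cnode10) hits "CQ transport error -6 (No such device or address)" on its admin qpair, is marked failed, and its pending ASYNC EVENT REQUESTs complete as ABORTED - SQ DELETION. When triaging a saved copy of such a log, a one-liner like the following tallies the failures per controller (the file name build.log is hypothetical):

    # Count transport errors and failed-state transitions per cnode.
    grep -oE 'nqn\.2016-06\.io\.spdk:cnode[0-9]+, 1\] (CQ transport error|in failed state)' build.log \
        | sort | uniq -c

For this run each cnode should appear exactly once per event type; a controller missing from the tally would indicate it never observed the shutdown.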
00:17:57.862 [2024-11-27 12:56:24.082947] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:17:57.862 [2024-11-27 12:56:24.085539] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:17:57.862 [2024-11-27 12:56:24.087978] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:17:57.862 [2024-11-27 12:56:24.090504] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:17:57.862 [2024-11-27 12:56:24.093060] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:17:57.862 [2024-11-27 12:56:24.095421] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:17:57.862 [2024-11-27 12:56:24.097422] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:17:57.862 [2024-11-27 12:56:24.097483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026ff880 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026ef800 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026df780 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026cf700 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026bf680 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026af600 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100269f580 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100268f500 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100267f480 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100266f400 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100265f380 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100264f300 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100263f280 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100262f200 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100261f180 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100260f100 len:0x10000 key:0x184c00 00:17:57.862 [2024-11-27 12:56:24.097915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029f0000 len:0x10000 key:0x183c00 00:17:57.862 [2024-11-27 12:56:24.097941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029dff80 len:0x10000 key:0x183c00 00:17:57.862 [2024-11-27 12:56:24.097968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.097982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029cff00 len:0x10000 key:0x183c00 00:17:57.862 [2024-11-27 12:56:24.097994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029bfe80 len:0x10000 key:0x183c00 00:17:57.862 [2024-11-27 12:56:24.098021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029afe00 len:0x10000 key:0x183c00 00:17:57.862 [2024-11-27 12:56:24.098047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008959000 len:0x10000 key:0x183900 00:17:57.862 [2024-11-27 12:56:24.098076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008938000 len:0x10000 key:0x183900 00:17:57.862 [2024-11-27 12:56:24.098103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008917000 len:0x10000 key:0x183900 00:17:57.862 [2024-11-27 12:56:24.098130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 
00:17:57.862 [2024-11-27 12:56:24.098144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000088f6000 len:0x10000 key:0x183900 00:17:57.862 [2024-11-27 12:56:24.098156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000088d5000 len:0x10000 key:0x183900 00:17:57.862 [2024-11-27 12:56:24.098183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec7a000 len:0x10000 key:0x183900 00:17:57.862 [2024-11-27 12:56:24.098209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec59000 len:0x10000 key:0x183900 00:17:57.862 [2024-11-27 12:56:24.098235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec38000 len:0x10000 key:0x183900 00:17:57.862 [2024-11-27 12:56:24.098262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec17000 len:0x10000 key:0x183900 00:17:57.862 [2024-11-27 12:56:24.098289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebf6000 len:0x10000 key:0x183900 00:17:57.862 [2024-11-27 12:56:24.098315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebd5000 len:0x10000 key:0x183900 00:17:57.862 [2024-11-27 12:56:24.098342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.862 [2024-11-27 12:56:24.098357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebb4000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098384] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eb93000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eb72000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eb51000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eb30000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df12000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000def1000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ded0000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3df000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3be000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c39d000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c37c000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c35b000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c33a000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c319000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2f8000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2d7000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2b6000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c295000 len:0x10000 key:0x183900 00:17:57.863 [2024-11-27 12:56:24.098895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0 00:17:57.863 [2024-11-27 12:56:24.098909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c274000 len:0x10000 key:0x183900
00:17:57.863 [2024-11-27 12:56:24.098922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b8941000 sqhd:7210 p:0 m:0 dnr:0
[... the same ABORTED - SQ DELETION (00/08) completion repeats for each remaining outstanding READ on qid:1 (lba:39424-40832, len:128, key:0x183900) ...]
00:17:57.863 [2024-11-27 12:56:24.101369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002adf780 len:0x10000 key:0x184d00
[... ABORTED - SQ DELETION (00/08) repeats for the outstanding WRITEs on qid:1 (lba:41088-42880, len:128, keys:0x184d00/0x182d00) ...]
00:17:57.864 [2024-11-27 12:56:24.101825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f55f000 len:0x10000 key:0x183900
[... ABORTED - SQ DELETION (00/08) repeats for the outstanding READs on qid:1 (lba:34944-40832, len:128, key:0x183900) ...]
00:17:57.865 [2024-11-27 12:56:24.105173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf680 len:0x10000 key:0x184800
[... ABORTED - SQ DELETION (00/08) repeats for the outstanding WRITEs on qid:1 (lba:24704-32640, len:128, keys:0x184800/0x184b00/0x184500) through 12:56:24.114730 ...]
00:17:57.867 [2024-11-27 12:56:24.117673] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:17:57.867 [2024-11-27 12:56:24.117776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:57.867 [2024-11-27 12:56:24.117799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:57.867 [2024-11-27 12:56:24.117813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:57.867 [2024-11-27 12:56:24.117825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:57.867 [2024-11-27 12:56:24.117838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:57.867 [2024-11-27 12:56:24.117851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:57.867 [2024-11-27 12:56:24.117864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:17:57.867 [2024-11-27 12:56:24.117876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:57.867 [2024-11-27 12:56:24.119765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:57.867 [2024-11-27 12:56:24.119783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:17:57.868 [2024-11-27 12:56:24.119796] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.119816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:57.868 [2024-11-27 12:56:24.119829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0
00:17:57.868 [2024-11-27 12:56:24.119842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:57.868 [2024-11-27 12:56:24.119854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0
00:17:57.868 [2024-11-27 12:56:24.119866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:57.868 [2024-11-27 12:56:24.119878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0
00:17:57.868 [2024-11-27 12:56:24.119891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:17:57.868 [2024-11-27 12:56:24.119902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32731 cdw0:1 sqhd:0990 p:0 m:0 dnr:0
00:17:57.868 [2024-11-27 12:56:24.139003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:57.868 [2024-11-27 12:56:24.139055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:17:57.868 [2024-11-27 12:56:24.139087] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.139131] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.139173] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.139218] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.139257] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.139300] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.139344] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.139385] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.139428] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.141946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:17:57.868 [2024-11-27 12:56:24.141967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:17:57.868 [2024-11-27 12:56:24.141977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:17:57.868 [2024-11-27 12:56:24.142019] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.142032] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.142045] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.142056] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.142068] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.142081] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.142095] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:17:57.868 [2024-11-27 12:56:24.142392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:17:57.868 [2024-11-27 12:56:24.142405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:17:57.868 [2024-11-27 12:56:24.142415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:17:57.868 [2024-11-27 12:56:24.142425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:17:57.868 [2024-11-27 12:56:24.142438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:17:57.868 [2024-11-27 12:56:24.142451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:17:57.868 [2024-11-27 12:56:24.142994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:17:57.868 task offset: 40960 on job bdev=Nvme7n1 fails
00:17:57.868
00:17:57.868                                                                Latency(us)
00:17:57.868 [2024-11-27T11:56:24.253Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:57.868 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:57.868 Job: Nvme1n1 ended in about 1.90 seconds with error
00:17:57.868 Verification LBA range: start 0x0 length 0x400
00:17:57.868      Nvme1n1              :       1.90     143.59       8.97      33.66       0.00  356561.38    6003.10 1046898.28
00:17:57.868 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:57.868 Job: Nvme2n1 ended in about 1.90 seconds with error
00:17:57.868 Verification LBA range: start 0x0 length 0x400
00:17:57.868      Nvme2n1              :       1.90     134.56       8.41      33.64       0.00  372185.50   55364.81 1046898.28
00:17:57.868 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:57.868 Job: Nvme3n1 ended in about 1.90 seconds with error
00:17:57.868 Verification LBA range: start 0x0 length 0x400
00:17:57.868      Nvme3n1              :       1.90     151.31       9.46      33.62       0.00  335585.88    8231.32 1046898.28
00:17:57.868 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:57.868 Job: Nvme4n1 ended in about 1.90 seconds with error
00:17:57.868 Verification LBA range: start 0x0 length 0x400
00:17:57.868      Nvme4n1              :       1.90     154.91       9.68      33.61       0.00  326349.13    4168.09 1046898.28
00:17:57.868 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:57.868 Job: Nvme5n1 ended in about 1.91 seconds with error
00:17:57.868 Verification LBA range: start 0x0 length 0x400
00:17:57.868      Nvme5n1              :       1.91     142.76       8.92      33.59       0.00  345882.16   24851.25 1046898.28
00:17:57.868 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:57.868 Job: Nvme6n1 ended in about 1.91 seconds with error
00:17:57.868 Verification LBA range: start 0x0 length 0x400
00:17:57.868      Nvme6n1              :       1.91     151.09       9.44      33.58       0.00  327563.23   27472.69 1046898.28
00:17:57.868 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:57.868 Job: Nvme7n1 ended in about 1.91 seconds with error
00:17:57.868 Verification LBA range: start 0x0 length 0x400
00:17:57.868      Nvme7n1              :       1.91     151.02       9.44      33.56       0.00  324515.21   38377.88 1046898.28
00:17:57.868 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:57.868 Job: Nvme8n1 ended in about 1.88 seconds with error
00:17:57.868 Verification LBA range: start 0x0 length 0x400
00:17:57.868      Nvme8n1              :       1.88     146.97       9.19      33.96       0.00  331457.64   42991.62 1073741.82
00:17:57.868 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:57.868 Job: Nvme9n1 ended in about 1.89 seconds with error
00:17:57.868 Verification LBA range: start 0x0 length 0x400
00:17:57.868      Nvme9n1              :       1.89     144.25       9.02      33.94       0.00  333680.60   39007.03 1067030.94
00:17:57.868 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:57.868 Job: Nvme10n1 ended in about 1.86 seconds with error
00:17:57.868 Verification LBA range: start 0x0 length 0x400
00:17:57.868      Nvme10n1             :       1.86     103.27       6.45      34.42       0.00  425307.34   62914.56 1067030.94
00:17:57.868 [2024-11-27T11:56:24.253Z] ===================================================================================================================
00:17:57.868 [2024-11-27T11:56:24.253Z] Total                       :               1423.74      88.98     337.58       0.00  345500.31    4168.09 1073741.82
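The MiB/s column in this table is just the IOPS column scaled by the 64 KiB I/O size (65536 B / 2^20 = 1/16), which gives a quick way to sanity-check any row. For example, for Nvme1n1: 143.59 x 65536 / 1048576 = 8.97. A one-line check of that relationship (illustrative only, not part of the test):

```bash
# MiB/s = IOPS * io_size / 2^20; with io_size=65536 this is IOPS/16
awk 'BEGIN { printf "%.2f\n", 143.59 * 65536 / 1048576 }'   # prints 8.97
```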
00:17:57.868 [2024-11-27 12:56:24.190106] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:57.868 [2024-11-27 12:56:24.192797] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:57.868 [2024-11-27 12:56:24.192873] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:57.868 [2024-11-27 12:56:24.192909] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:17:57.868 [2024-11-27 12:56:24.193029] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:57.868 [2024-11-27 12:56:24.193063] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:57.868 [2024-11-27 12:56:24.193088] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170e5300
00:17:57.868 [2024-11-27 12:56:24.193209] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:57.868 [2024-11-27 12:56:24.193243] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:57.868 [2024-11-27 12:56:24.193266] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d9c80
00:17:57.869 [2024-11-27 12:56:24.193393] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:57.869 [2024-11-27 12:56:24.193404] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:57.869 [2024-11-27 12:56:24.193411] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170bf1c0
00:17:57.869 [2024-11-27 12:56:24.193518] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:57.869 [2024-11-27 12:56:24.193528] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:57.869 [2024-11-27 12:56:24.193535] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001709b1c0
00:17:57.869 [2024-11-27 12:56:24.193604] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:57.869 [2024-11-27 12:56:24.193618] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:57.869 [2024-11-27 12:56:24.193625] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170a8500
00:17:57.869 [2024-11-27 12:56:24.193703] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:57.869 [2024-11-27 12:56:24.193713] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:57.869 [2024-11-27 12:56:24.193720] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170c5040
00:17:57.869 [2024-11-27 12:56:24.193847] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:57.869 [2024-11-27 12:56:24.193857] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:57.869 [2024-11-27 12:56:24.193864] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170c6340
00:17:57.869 [2024-11-27 12:56:24.193982] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:57.869 [2024-11-27 12:56:24.193992] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:57.869 [2024-11-27 12:56:24.193999] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d2900
00:17:57.869 [2024-11-27 12:56:24.194368] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:57.869 [2024-11-27 12:56:24.194380] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:57.869 [2024-11-27 12:56:24.194387] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001708e080
00:17:58.127 12:56:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 4187687
00:17:58.127 12:56:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:17:58.127 12:56:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4187687
00:17:58.127 12:56:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:17:58.127 12:56:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:58.127 12:56:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:17:58.385 12:56:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:58.385 12:56:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 4187687
00:17:58.952 [2024-11-27 12:56:25.197786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:58.952 [2024-11-27 12:56:25.197845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:17:58.952 [2024-11-27 12:56:25.199434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:58.952 [2024-11-27 12:56:25.199477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:17:58.952 [2024-11-27 12:56:25.201171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:58.952 [2024-11-27 12:56:25.201214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:17:58.952 [2024-11-27 12:56:25.202912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:58.952 [2024-11-27 12:56:25.202953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:17:58.952 [2024-11-27 12:56:25.204581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:58.952 [2024-11-27 12:56:25.204634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:17:58.952 [2024-11-27 12:56:25.206425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:58.952 [2024-11-27 12:56:25.206465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:17:58.952 [2024-11-27 12:56:25.207978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:58.952 [2024-11-27 12:56:25.208022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:17:58.952 [2024-11-27 12:56:25.209375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:58.952 [2024-11-27 12:56:25.209415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:17:58.952 [2024-11-27 12:56:25.210824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:58.952 [2024-11-27 12:56:25.210877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:17:58.952 [2024-11-27 12:56:25.212213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:58.952 [2024-11-27 12:56:25.212253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:17:58.952 [2024-11-27 12:56:25.212288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:17:58.952 [2024-11-27 12:56:25.212318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:17:58.952 [2024-11-27 12:56:25.212348] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:17:58.953 [2024-11-27 12:56:25.212380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:17:58.953 [2024-11-27 12:56:25.212419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:17:58.953 [2024-11-27 12:56:25.212448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:17:58.953 [2024-11-27 12:56:25.212477] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state
00:17:58.953 [2024-11-27 12:56:25.212505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:17:58.953 [2024-11-27 12:56:25.212541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:17:58.953 [2024-11-27 12:56:25.212569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:17:58.953 [2024-11-27 12:56:25.212597] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state
00:17:58.953 [2024-11-27 12:56:25.212639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:17:58.953 [2024-11-27 12:56:25.212820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:17:58.953 [2024-11-27 12:56:25.212835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:17:58.953 [2024-11-27 12:56:25.212846] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state
00:17:58.953 [2024-11-27 12:56:25.212858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:17:58.953 [2024-11-27 12:56:25.212872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:17:58.953 [2024-11-27 12:56:25.212883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:17:58.953 [2024-11-27 12:56:25.212895] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state
00:17:58.953 [2024-11-27 12:56:25.212907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:17:58.953 [2024-11-27 12:56:25.212920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:17:58.953 [2024-11-27 12:56:25.212932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:17:58.953 [2024-11-27 12:56:25.212943] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state
00:17:58.953 [2024-11-27 12:56:25.212955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:17:58.953 [2024-11-27 12:56:25.212969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:17:58.953 [2024-11-27 12:56:25.212980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:17:58.953 [2024-11-27 12:56:25.212991] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state
00:17:58.953 [2024-11-27 12:56:25.213002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:17:58.953 [2024-11-27 12:56:25.213017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:17:58.953 [2024-11-27 12:56:25.213032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:17:58.953 [2024-11-27 12:56:25.213043] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state
00:17:58.953 [2024-11-27 12:56:25.213055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:17:58.953 [2024-11-27 12:56:25.213069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:17:58.953 [2024-11-27 12:56:25.213080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:17:58.953 [2024-11-27 12:56:25.213092] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state
00:17:58.953 [2024-11-27 12:56:25.213104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:17:58.953 [2024-11-27 12:56:25.213117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:17:58.953 [2024-11-27 12:56:25.213129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:17:58.953 [2024-11-27 12:56:25.213140] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state
00:17:58.953 [2024-11-27 12:56:25.213152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
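The trace just above shows autotest_common.sh's NOT helper doing exit-status laundering: `wait 4187687` returns 255 because the bdevperf app died non-zero, anything above 128 is first mapped to 127, the case statement then normalizes every failure to 1, and the final arithmetic test inverts the result so NOT succeeds exactly when the wrapped command failed. A minimal sketch of that pattern, reconstructed from the xtrace (the real helper also runs valid_exec_arg and has more case branches):

```bash
# Succeeds (exit 0) only if the wrapped command fails -- reconstructed sketch.
NOT() {
    local es=0
    "$@" || es=$?             # run the command, capture its exit status
    if (( es > 128 )); then   # status > 128 usually means death by signal
        es=127                # collapse to a generic failure code
    fi
    case "$es" in
        0) ;;                 # command unexpectedly succeeded
        *) es=1 ;;            # normalize every failure to 1
    esac
    (( !es == 0 ))            # invert: true (0) iff the command failed
}
```

Here `NOT wait 4187687` therefore passes: the shutdown test expects the target application to have exited with an error.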
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:17:59.213 rmmod nvme_rdma
00:17:59.213 rmmod nvme_fabrics
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 4187374 ']'
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 4187374
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 4187374 ']'
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 4187374
00:17:59.213 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4187374) - No such process
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 4187374 is not found'
00:17:59.213 Process with pid 4187374 is not found
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:17:59.213
00:17:59.213 real    0m6.200s
00:17:59.213 user    0m18.864s
00:17:59.213 sys     0m1.495s
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:17:59.213 ************************************
00:17:59.213 END TEST nvmf_shutdown_tc3
00:17:59.213 ************************************
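The killprocess call above is a no-op here because the target application (pid 4187374) was already gone by teardown time, so `kill -0` fails and the helper just reports it. A simplified sketch of that guard pattern, reconstructed from the xtrace (the real helper also inspects the process name via `ps --no-headers -o comm=` and special-cases sudo-owned processes):

```bash
# Kill a pid only if it is still alive -- sketch of the traced pattern.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                       # no pid supplied
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"   # already exited
        return 0
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                      # terminate and reap
}
```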
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:17:59.213 ************************************
00:17:59.213 START TEST nvmf_shutdown_tc4
00:17:59.213 ************************************
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:17:59.213 Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:17:59.213 Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:17:59.213 Found net devices under 0000:d9:00.0: mlx_0_0
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:17:59.213 Found net devices under 0000:d9:00.1: mlx_0_1
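gather_supported_nvmf_pci_devs, traced above, builds the NIC candidate list by matching vendor:device IDs against a prebuilt pci_bus_cache map (here both ports match Mellanox 0x15b3:0x1015) and then globbing each function's net directory in sysfs. A condensed, standalone approximation of the same idea, using lspci instead of the script's internal cache (hypothetical sketch, not the helper itself):

```bash
# Approximate the traced discovery: find 0x15b3:0x1015 functions and
# list the net interfaces that sysfs associates with each of them.
for pci in $(lspci -Dn -d 15b3:1015 | awk '{print $1}'); do
    echo "Found $pci (0x15b3 - 0x1015)"
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
    done
done
```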
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core
00:17:59.213 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:17:59.473 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:17:59.473     link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:17:59.473     altname enp217s0f0np0
00:17:59.473     altname ens818f0np0
00:17:59.473     inet 192.168.100.8/24 scope global mlx_0_0
00:17:59.473        valid_lft forever preferred_lft forever
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:17:59.473 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:17:59.473     link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:17:59.473     altname enp217s0f1np1
00:17:59.473     altname ens818f1np1
00:17:59.473     inet 192.168.100.9/24 scope global mlx_0_1
00:17:59.473        valid_lft forever preferred_lft forever
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:17:59.473 192.168.100.9'
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:17:59.473 192.168.100.9'
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:17:59.473 192.168.100.9'
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
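allocate_nic_ips and get_available_rdma_ips, traced above, walk the RDMA-capable interfaces and peel the first IPv4 address off each one with the same three-stage pipeline. A standalone sketch of that pipeline, reconstructed from the trace (the real helper in nvmf/common.sh carries extra validation):

```bash
# First IPv4 address of an interface, without the /prefix suffix,
# exactly as the traced get_ip_address pipeline does it.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8
get_ip_address mlx_0_1   # -> 192.168.100.9
```

The two results are then joined into RDMA_IP_LIST, and head/tail pull out NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP as seen above.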
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:17:59.473 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=4188606
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 4188606
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 4188606 ']'
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:59.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:59.474 12:56:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:17:59.731 [2024-11-27 12:56:25.875564] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:17:59.731 [2024-11-27 12:56:25.875643] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:59.731 [2024-11-27 12:56:25.966681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:59.731 [2024-11-27 12:56:26.004907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:59.731 [2024-11-27 12:56:26.004950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:59.731 [2024-11-27 12:56:26.004959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:59.731 [2024-11-27 12:56:26.004968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:59.731 [2024-11-27 12:56:26.004975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:59.731 [2024-11-27 12:56:26.006782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:59.731 [2024-11-27 12:56:26.006865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:59.731 [2024-11-27 12:56:26.006979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:59.731 [2024-11-27 12:56:26.006980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:18:00.662 [2024-11-27 12:56:26.796181] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8800f0/0x8845e0) succeed.
00:18:00.662 [2024-11-27 12:56:26.805386] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x881780/0x8c5c80) succeed.
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:18:00.662 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
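Each of those ten loop iterations appends one subsystem's worth of RPC calls to rpcs.txt, which the single rpc_cmd batch below then replays against the target. As a hypothetical sketch of what one iteration plausibly emits (the RPC names are standard SPDK RPCs, but the flags, sizes, and $testdir here are illustrative assumptions, not read from shutdown.sh):

```bash
# Illustrative per-subsystem batch for i = 1..10 (not the literal heredoc).
cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
```

The Malloc1 through Malloc10 bdev names and the single rdma listener notice in the rpc_cmd output below are consistent with a batch of this shape.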
00:18:00.663 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:18:00.663 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.663 12:56:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:18:00.663 Malloc1
[2024-11-27 12:56:27.045775] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:18:00.919 Malloc2
00:18:00.919 Malloc3
00:18:00.919 Malloc4
00:18:00.919 Malloc5
00:18:00.919 Malloc6
00:18:00.919 Malloc7
00:18:01.177 Malloc8
00:18:01.177 Malloc9
00:18:01.177 Malloc10
00:18:01.177 12:56:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:01.177 12:56:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:18:01.177 12:56:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:01.177 12:56:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:18:01.177 12:56:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=4188922
00:18:01.177 12:56:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4
00:18:01.177 12:56:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:18:01.435 [2024-11-27 12:56:27.574962] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
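The @148 trace is the I/O generator for this test case: spdk_nvme_perf is pointed at the discovery service on 192.168.100.8:4420 and left running while the target is torn down underneath it. An illustrative re-invocation, annotated only with the flag meanings that are certain from the perf usage text; -O and -P are left as recorded:

# Flag values copied from the trace; comments are annotations, not output.
perf_args=(
  -q 128         # queue depth
  -o 45056       # I/O size in bytes (44 KiB)
  -O 4096
  -w randwrite   # workload pattern: 100% random writes
  -t 20          # run time in seconds
  -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
  -P 4
)
./build/bin/spdk_nvme_perf "${perf_args[@]}" &
perfpid=$!   # shutdown.sh records this as perfpid (4188922 in this run)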
00:18:06.701 12:56:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:06.701 12:56:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 4188606
00:18:06.701 12:56:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 4188606 ']'
00:18:06.701 12:56:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 4188606
00:18:06.701 12:56:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:18:06.701 12:56:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:06.701 12:56:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4188606
00:18:06.701 12:56:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:06.701 12:56:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:06.701 12:56:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4188606'
00:18:06.701 killing process with pid 4188606
00:18:06.701 12:56:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 4188606
00:18:06.701 12:56:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 4188606
00:18:06.701 NVMe io qpair process completion error
00:18:06.701 NVMe io qpair process completion error
00:18:06.701 NVMe io qpair process completion error
00:18:06.701 NVMe io qpair process completion error
00:18:06.701 NVMe io qpair process completion error
00:18:06.701 starting I/O failed: -6
00:18:06.701 starting I/O failed: -6
00:18:06.701 NVMe io qpair process completion error
00:18:06.701 NVMe io qpair process completion error
00:18:06.959 12:56:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:18:07.527 Write completed with error (sct=0, sc=8)
00:18:07.527 starting I/O failed: -6
00:18:07.527 Write completed with error (sct=0, sc=8) [this entry repeats once per outstanding I/O; the flood continues below]
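killprocess, traced line by line above, is the helper that tears the target down mid-I/O, which is the whole point of shutdown test case 4. A stand-in with the same observable behavior as the trace, sketched from the trace alone rather than copied from common/autotest_common.sh:

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1                  # the '[' -z 4188606 ']' check
  kill -0 "$pid" 2>/dev/null || return 1     # is the process still alive?
  if [ "$(uname)" = Linux ]; then
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1   # never signal a sudo wrapper
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"
}

Here the victim is pid 4188606 (a reactor of the nvmf target), and the qpair errors that follow are the perf process reacting to the target's disappearance.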
00:18:07.527 Write completed with error (sct=0, sc=8) [entry repeated many times through 00:18:07.528, once per failed I/O]
00:18:07.528 starting I/O failed: -6 [interleaved with further completion-error entries]
00:18:07.528 [2024-11-27 12:56:33.651833] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed
00:18:07.528 Write completed with error (sct=0, sc=8) [flood continues through 00:18:07.529, still interleaved with "starting I/O failed: -6"]
00:18:07.529 [2024-11-27 12:56:33.664118] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
00:18:07.529 Write completed with error (sct=0, sc=8) [flood continues]
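Reading the flood: sct=0/sc=8 is a generic-status completion whose status code 0x08 decodes to "command aborted due to SQ deletion", which is what the host sees as the killed target's queues go away, while each controller's keep-alive submission starts failing in parallel. When triaging a log like this the tallies usually matter more than the individual entries; an illustrative pass with standard tools over a saved copy of this console output (the filename is hypothetical):

log=nvmf-phy-autotest-console.log   # hypothetical saved copy of this output
grep -c 'Write completed with error (sct=0, sc=8)' "$log"   # completions failed back
grep -c 'starting I/O failed: -6' "$log"                    # submissions refused outright
# which controllers failed a keep-alive, with a count per controller
grep -o 'cnode[0-9]*, 1\] Submitting Keep Alive failed' "$log" | sort | uniq -c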
00:18:07.529 Write completed with error (sct=0, sc=8) [flood resumes]
00:18:07.529 starting I/O failed: -6
00:18:07.529 [2024-11-27 12:56:33.675125] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:18:07.529 Write completed with error (sct=0, sc=8) [flood continues through 00:18:07.530]
00:18:07.530 [2024-11-27 12:56:33.687087] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed
00:18:07.530 Write completed with error (sct=0, sc=8) [flood continues through 00:18:07.531]
00:18:07.531 Write completed with error (sct=0, sc=8) [flood continues, interleaved with "starting I/O failed: -6"]
00:18:07.531 [2024-11-27 12:56:33.698523] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
00:18:07.531 Write completed with error (sct=0, sc=8) [flood continues through 00:18:07.532, interleaved with "starting I/O failed: -6"]
00:18:07.532 NVMe io qpair process completion error
00:18:07.532 NVMe io qpair process completion error
00:18:07.532 NVMe io qpair process completion error
00:18:07.532 NVMe io qpair process completion error
00:18:07.790 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 4188922
00:18:07.790 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
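The @158 trace is the expected-failure assertion of this test case: shutdown.sh wraps wait "$perfpid" in the NOT helper, so the test passes only if perf exits non-zero after losing its target mid-run. The contract, sketched; the real helper in common/autotest_common.sh also performs the valid_exec_arg check traced below:

# Sketch of the NOT contract: succeed only when the wrapped command fails.
NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))   # invert the exit status
}
# Usage mirroring the trace; wait reaps perf and reports its exit code:
# NOT wait "$perfpid"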
00:18:07.790 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4188922
00:18:07.790 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:18:07.790 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:07.790 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:18:07.790 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:07.790 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 4188922
00:18:08.359 Write completed with error (sct=0, sc=8) [flood resumes as the remaining qpairs fail back their outstanding I/O]
00:18:08.359 [2024-11-27 12:56:34.716247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:18:08.359 [2024-11-27 12:56:34.716312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:18:08.359 Write completed with error (sct=0, sc=8) [flood continues]
00:18:08.359 [2024-11-27 12:56:34.718666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:18:08.359 [2024-11-27 12:56:34.718715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:18:08.359 Write completed with error (sct=0, sc=8) [flood continues through 00:18:08.360]
00:18:08.360 [2024-11-27 12:56:34.726555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:18:08.360 [2024-11-27 12:56:34.726659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:18:08.360 Write completed with error (sct=0, sc=8) [flood continues to the end of this excerpt]
00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error 
(sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 [2024-11-27 12:56:34.738562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:08.360 [2024-11-27 12:56:34.738673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.360 Write completed with error (sct=0, sc=8) 00:18:08.361 Write completed with error (sct=0, sc=8) 00:18:08.361 Write completed with error (sct=0, sc=8) 00:18:08.361 Write completed with error (sct=0, sc=8) 00:18:08.361 Write completed with error (sct=0, sc=8) 00:18:08.361 Write completed with error (sct=0, sc=8) 00:18:08.361 Write completed with error (sct=0, sc=8) 00:18:08.361 Write completed with error (sct=0, sc=8) 00:18:08.361 Write completed with error (sct=0, sc=8) 00:18:08.361 Write completed with error (sct=0, sc=8) 00:18:08.361 Write completed with error (sct=0, sc=8) 00:18:08.361 Write completed with error (sct=0, sc=8) 00:18:08.361 Write completed with error (sct=0, sc=8) 00:18:08.361 
[2024-11-27 12:56:34.741147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:08.361 [2024-11-27 12:56:34.741192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:08.620 Write completed with error (sct=0, sc=8) 00:18:08.620 Write completed with error (sct=0, sc=8) 00:18:08.620 Write completed with error (sct=0, sc=8) 00:18:08.620 Write completed with error (sct=0, sc=8) 00:18:08.620 Write completed with error (sct=0, sc=8) 00:18:08.620 Write completed with error (sct=0, sc=8) 00:18:08.620 Write completed with error (sct=0, sc=8) 00:18:08.620 Write completed with error (sct=0, sc=8) 00:18:08.620 Write completed with error (sct=0, sc=8) 00:18:08.620 Write completed with error (sct=0, sc=8) 00:18:08.620 Write completed with error (sct=0, sc=8) 00:18:08.620 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 [2024-11-27 12:56:34.743575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:08.621 [2024-11-27 12:56:34.743627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 [2024-11-27 12:56:34.745967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:08.621 [2024-11-27 12:56:34.746011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 [2024-11-27 12:56:34.748412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 [2024-11-27 12:56:34.748471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed 
with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 [2024-11-27 12:56:34.750565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:08.621 [2024-11-27 12:56:34.750646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 
00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 Write completed with error (sct=0, sc=8) 00:18:08.621 [2024-11-27 12:56:34.789062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:08.621 [2024-11-27 12:56:34.789127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:18:08.621 Initializing NVMe Controllers 00:18:08.621 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2 00:18:08.621 Controller IO queue size 128, less than required. 00:18:08.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.621 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3 00:18:08.621 Controller IO queue size 128, less than required. 00:18:08.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.621 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6 00:18:08.621 Controller IO queue size 128, less than required. 00:18:08.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.621 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7 00:18:08.621 Controller IO queue size 128, less than required. 00:18:08.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.621 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.621 Controller IO queue size 128, less than required. 00:18:08.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.621 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10 00:18:08.621 Controller IO queue size 128, less than required. 00:18:08.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.622 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4 00:18:08.622 Controller IO queue size 128, less than required. 00:18:08.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.622 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5 00:18:08.622 Controller IO queue size 128, less than required. 00:18:08.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.622 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9 00:18:08.622 Controller IO queue size 128, less than required. 00:18:08.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:18:08.622 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:18:08.622 Controller IO queue size 128, less than required.
00:18:08.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:08.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:18:08.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:18:08.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:18:08.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:18:08.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:08.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:18:08.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:18:08.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:18:08.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:18:08.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:18:08.622 Initialization complete. Launching workers.
00:18:08.622 ========================================================
00:18:08.622                                                                             Latency(us)
00:18:08.622 Device Information                                                       :     IOPS   MiB/s   Average       min        max
00:18:08.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1560.43   67.05  81099.96    109.76  1189445.06
00:18:08.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1567.47   67.35  80843.18    116.66  1189291.04
00:18:08.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1574.68   67.66  94830.31    105.03  2198905.59
00:18:08.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1583.06   68.02  94433.97    110.66  2204349.21
00:18:08.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1553.39   66.75  81566.85    109.97  1223653.23
00:18:08.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1551.04   66.65  81792.80    114.94  1225788.30
00:18:08.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1560.09   67.04  81427.87    113.24  1212169.51
00:18:08.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1555.90   66.86  81514.43    109.54  1210952.75
00:18:08.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1611.22   69.23  92665.60    115.65  2060049.28
00:18:08.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1578.20   67.81  94685.71    114.11  2180591.71
00:18:08.622 ========================================================
00:18:08.622 Total                                                                    : 15695.49  674.42  86538.41    105.03  2204349.21
00:18:08.622
00:18:08.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 4188606 ']'
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 4188606
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 4188606 ']'
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 4188606
00:18:08.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4188606) - No such process
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 4188606 is not found'
00:18:08.622 Process with pid 4188606 is not found
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:18:08.622
00:18:08.622 real 0m9.333s
00:18:08.622 user 0m34.685s
00:18:08.622 sys 0m1.504s
00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- #
xtrace_disable 00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:18:08.622 ************************************ 00:18:08.622 END TEST nvmf_shutdown_tc4 00:18:08.622 ************************************ 00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:18:08.622 00:18:08.622 real 0m36.231s 00:18:08.622 user 1m45.398s 00:18:08.622 sys 0m12.068s 00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:08.622 ************************************ 00:18:08.622 END TEST nvmf_shutdown 00:18:08.622 ************************************ 00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:08.622 ************************************ 00:18:08.622 START TEST nvmf_nsid 00:18:08.622 ************************************ 00:18:08.622 12:56:34 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:18:08.882 * Looking for test storage... 00:18:08.882 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:18:08.882 12:56:35 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:08.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.882 --rc genhtml_branch_coverage=1 00:18:08.882 --rc genhtml_function_coverage=1 00:18:08.882 --rc genhtml_legend=1 00:18:08.882 --rc geninfo_all_blocks=1 00:18:08.882 --rc geninfo_unexecuted_blocks=1 00:18:08.882 00:18:08.882 ' 00:18:08.882 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:08.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.882 --rc genhtml_branch_coverage=1 00:18:08.882 --rc genhtml_function_coverage=1 00:18:08.882 --rc genhtml_legend=1 00:18:08.882 --rc geninfo_all_blocks=1 00:18:08.882 --rc geninfo_unexecuted_blocks=1 00:18:08.882 00:18:08.883 ' 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:08.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.883 --rc genhtml_branch_coverage=1 00:18:08.883 --rc genhtml_function_coverage=1 00:18:08.883 --rc genhtml_legend=1 00:18:08.883 --rc geninfo_all_blocks=1 00:18:08.883 --rc geninfo_unexecuted_blocks=1 00:18:08.883 00:18:08.883 ' 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:08.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.883 --rc genhtml_branch_coverage=1 00:18:08.883 --rc genhtml_function_coverage=1 00:18:08.883 --rc genhtml_legend=1 00:18:08.883 --rc geninfo_all_blocks=1 00:18:08.883 --rc geninfo_unexecuted_blocks=1 00:18:08.883 00:18:08.883 ' 00:18:08.883 12:56:35 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:08.883 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:08.883 12:56:35 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.859 12:56:43 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:18.859 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:18.860 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:18.860 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
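[Note: the entries above come from gather_supported_nvmf_pci_devs, which matches NICs by PCI vendor/device ID; 0x15b3/0x1015 is the Mellanox pair reported on this node. A minimal sketch of that kind of sysfs ID match, assuming the standard /sys/bus/pci layout; this is an illustration, not SPDK's implementation:]

    #!/usr/bin/env bash
    # Enumerate PCI devices and report those whose vendor/device IDs
    # match a Mellanox 0x15b3:0x1015 part, the way the trace above does.
    mellanox=0x15b3
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")    # e.g. 0x15b3
        device=$(<"$pci/device")    # e.g. 0x1015
        if [[ $vendor == "$mellanox" && $device == 0x1015 ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
        fi
    done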
00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:18.860 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:18.860 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:18.860 12:56:43 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
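allocate_nic_ips has just loaded the IB/RDMA kernel modules and is now resolving an IPv4 address per RDMA-capable interface; get_ip_address is the ip/awk/cut pipeline traced at common.sh@117, which strips the prefix length from the first address on the link. The same extraction in isolation:

    # Print the bare IPv4 address of an interface, without the /prefix.
    # `ip -o -4 addr show` emits one line per address; field 4 is "addr/len".
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # 192.168.100.8 on this rig, per the trace below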
00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:18.860 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:18.860 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:18.860 altname enp217s0f0np0 00:18:18.860 altname ens818f0np0 00:18:18.860 inet 192.168.100.8/24 scope global mlx_0_0 00:18:18.860 valid_lft forever preferred_lft forever 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:18.860 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:18.860 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:18.860 altname enp217s0f1np1 00:18:18.860 altname ens818f1np1 00:18:18.860 inet 192.168.100.9/24 scope global mlx_0_1 00:18:18.860 valid_lft forever preferred_lft forever 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:18.860 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:18.861 
12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:18.861 192.168.100.9' 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:18.861 192.168.100.9' 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:18.861 192.168.100.9' 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:18.861 12:56:43 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=4194178 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 4194178 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 4194178 ']' 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.861 12:56:43 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:18.861 [2024-11-27 12:56:43.888097] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:18:18.861 [2024-11-27 12:56:43.888157] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.861 [2024-11-27 12:56:43.976962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.861 [2024-11-27 12:56:44.016679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.861 [2024-11-27 12:56:44.016713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.861 [2024-11-27 12:56:44.016723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.861 [2024-11-27 12:56:44.016731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.861 [2024-11-27 12:56:44.016738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:18.861 [2024-11-27 12:56:44.017291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=4194264 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=06584e58-cc78-441d-a165-5e56ea197608 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=6b8b49ec-b2cf-427a-ab87-a0f1449ad08f 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=44415d55-3c73-47d1-b66c-1c3f041cf69b 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.861 12:56:44 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:18.861 null0 00:18:18.861 [2024-11-27 12:56:44.208275] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:18:18.861 [2024-11-27 12:56:44.208325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4194264 ] 00:18:18.861 null1 00:18:18.861 null2 00:18:18.861 [2024-11-27 12:56:44.243868] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa81660/0xa91f00) succeed. 00:18:18.861 [2024-11-27 12:56:44.252923] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa82b10/0xb11f40) succeed. 00:18:18.861 [2024-11-27 12:56:44.299015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.861 [2024-11-27 12:56:44.303459] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:18.861 [2024-11-27 12:56:44.339720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 4194264 /var/tmp/tgt2.sock 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 4194264 ']' 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:18:18.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.861 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:18.862 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.862 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:18:18.862 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:18:18.862 [2024-11-27 12:56:44.893421] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x260e090/0x2431ba0) succeed. 00:18:18.862 [2024-11-27 12:56:44.904021] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2605ee0/0x2473240) succeed. 
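Both targets are now up (nvmfpid 4194178 on /var/tmp/spdk.sock, tgt2pid 4194264 on /var/tmp/tgt2.sock) with null0/null1/null2 backing three namespaces tagged with freshly generated UUIDs. The phase that follows connects to cnode2 over RDMA and verifies each namespace's NGUID: uuid2nguid is simply the UUID with its dashes removed, compared case-insensitively against nvme id-ns output. A condensed sketch of one such check, with the device path and UUID taken from this run and error handling trimmed:

    # Verify that namespace 1 of controller nvme0 reports the NGUID derived
    # from the UUID it was created with (the UUID minus its dashes).
    ns1uuid=06584e58-cc78-441d-a165-5e56ea197608     # uuidgen output above
    expected=$(tr -d - <<< "$ns1uuid")
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)

    if [[ ${actual^^} == "${expected^^}" ]]; then
        echo "nguid match: ${actual^^}"
    else
        echo "nguid mismatch: got $actual, want $expected" >&2
        exit 1
    fi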
00:18:18.862 [2024-11-27 12:56:44.947678] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:18.862 nvme0n1 nvme0n2 00:18:18.862 nvme1n1 00:18:18.862 12:56:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:18:18.862 12:56:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:18:18.862 12:56:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 06584e58-cc78-441d-a165-5e56ea197608 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=06584e58cc78441da1655e56ea197608 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 06584E58CC78441DA1655E56EA197608 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 06584E58CC78441DA1655E56EA197608 == \0\6\5\8\4\E\5\8\C\C\7\8\4\4\1\D\A\1\6\5\5\E\5\6\E\A\1\9\7\6\0\8 ]] 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:26.982 12:56:51 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 6b8b49ec-b2cf-427a-ab87-a0f1449ad08f 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6b8b49ecb2cf427aab87a0f1449ad08f 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6B8B49ECB2CF427AAB87A0F1449AD08F 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 6B8B49ECB2CF427AAB87A0F1449AD08F == \6\B\8\B\4\9\E\C\B\2\C\F\4\2\7\A\A\B\8\7\A\0\F\1\4\4\9\A\D\0\8\F ]] 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:26.982 12:56:51 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:18:26.982 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:26.982 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:18:26.982 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:26.982 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 44415d55-3c73-47d1-b66c-1c3f041cf69b 00:18:26.982 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:26.982 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:18:26.982 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:18:26.982 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:18:26.982 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:26.982 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=44415d553c7347d1b66c1c3f041cf69b 00:18:26.983 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 44415D553C7347D1B66C1C3F041CF69B 00:18:26.983 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 44415D553C7347D1B66C1C3F041CF69B == 
\4\4\4\1\5\D\5\5\3\C\7\3\4\7\D\1\B\6\6\C\1\C\3\F\0\4\1\C\F\6\9\B ]] 00:18:26.983 12:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 4194264 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 4194264 ']' 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 4194264 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4194264 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4194264' 00:18:33.736 killing process with pid 4194264 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 4194264 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 4194264 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:33.736 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:33.736 rmmod nvme_rdma 00:18:33.737 rmmod nvme_fabrics 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 4194178 ']' 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 4194178 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 4194178 ']' 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 4194178 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4194178 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4194178' 00:18:33.737 killing process with pid 4194178 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 4194178 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 4194178 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:33.737 00:18:33.737 real 0m24.940s 00:18:33.737 user 0m33.655s 00:18:33.737 sys 0m7.886s 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:33.737 ************************************ 00:18:33.737 END TEST nvmf_nsid 00:18:33.737 ************************************ 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:33.737 00:18:33.737 real 8m18.158s 00:18:33.737 user 18m45.864s 00:18:33.737 sys 2m37.878s 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.737 12:56:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:33.737 ************************************ 00:18:33.737 END TEST nvmf_target_extra 00:18:33.737 ************************************ 00:18:33.737 12:57:00 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:18:33.737 12:57:00 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:33.737 12:57:00 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.737 12:57:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:33.737 ************************************ 00:18:33.737 START TEST nvmf_host 00:18:33.737 ************************************ 00:18:33.737 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:18:33.997 * Looking for test storage... 
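Teardown for the nsid test above is two killprocess calls, one per target; note the `ps --no-headers -o comm=` probe that confirms the pid still names an SPDK reactor (reactor_1, then reactor_0) before anything is signalled. A simplified sketch of that guard -- the real helper also special-cases processes launched under sudo, which is elided here:

    # Kill a test-owned process by pid, first checking that the pid has not
    # been recycled into something we must not signal directly.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        process_name=$(ps --no-headers -o comm= "$pid") || return 1
        [[ $process_name == sudo ]] && return 1    # handled differently upstream
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }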
00:18:33.997 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:33.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.997 --rc genhtml_branch_coverage=1 00:18:33.997 --rc genhtml_function_coverage=1 00:18:33.997 --rc genhtml_legend=1 00:18:33.997 --rc geninfo_all_blocks=1 00:18:33.997 --rc geninfo_unexecuted_blocks=1 00:18:33.997 00:18:33.997 ' 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:18:33.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.997 --rc genhtml_branch_coverage=1 00:18:33.997 --rc genhtml_function_coverage=1 00:18:33.997 --rc genhtml_legend=1 00:18:33.997 --rc geninfo_all_blocks=1 00:18:33.997 --rc geninfo_unexecuted_blocks=1 00:18:33.997 00:18:33.997 ' 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:33.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.997 --rc genhtml_branch_coverage=1 00:18:33.997 --rc genhtml_function_coverage=1 00:18:33.997 --rc genhtml_legend=1 00:18:33.997 --rc geninfo_all_blocks=1 00:18:33.997 --rc geninfo_unexecuted_blocks=1 00:18:33.997 00:18:33.997 ' 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:33.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.997 --rc genhtml_branch_coverage=1 00:18:33.997 --rc genhtml_function_coverage=1 00:18:33.997 --rc genhtml_legend=1 00:18:33.997 --rc geninfo_all_blocks=1 00:18:33.997 --rc geninfo_unexecuted_blocks=1 00:18:33.997 00:18:33.997 ' 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:33.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.997 ************************************ 00:18:33.997 START TEST nvmf_multicontroller 00:18:33.997 ************************************ 00:18:33.997 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:18:34.257 * Looking for test storage... 00:18:34.257 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.257 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.258 --rc genhtml_branch_coverage=1 00:18:34.258 --rc genhtml_function_coverage=1 00:18:34.258 --rc genhtml_legend=1 00:18:34.258 --rc geninfo_all_blocks=1 00:18:34.258 --rc geninfo_unexecuted_blocks=1 00:18:34.258 00:18:34.258 ' 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.258 --rc genhtml_branch_coverage=1 00:18:34.258 --rc genhtml_function_coverage=1 00:18:34.258 --rc genhtml_legend=1 00:18:34.258 --rc geninfo_all_blocks=1 00:18:34.258 --rc geninfo_unexecuted_blocks=1 00:18:34.258 00:18:34.258 ' 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.258 --rc genhtml_branch_coverage=1 00:18:34.258 --rc genhtml_function_coverage=1 00:18:34.258 --rc genhtml_legend=1 00:18:34.258 --rc geninfo_all_blocks=1 00:18:34.258 --rc geninfo_unexecuted_blocks=1 00:18:34.258 00:18:34.258 ' 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.258 --rc genhtml_branch_coverage=1 00:18:34.258 --rc genhtml_function_coverage=1 00:18:34.258 --rc genhtml_legend=1 00:18:34.258 --rc geninfo_all_blocks=1 00:18:34.258 --rc geninfo_unexecuted_blocks=1 00:18:34.258 00:18:34.258 ' 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
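Every run_test re-enters this lcov probe: scripts/common.sh splits both version strings on dots, dashes and colons (the IFS=.-: reads above) and walks the components numerically, so `lt 1.15 2` is decided by 1 < 2 in the first slot. A compact reimplementation of that comparison, assuming purely numeric components:

    # Succeed if version $1 sorts strictly before version $2.
    # Components split on '.', '-' and ':'; missing fields count as 0.
    version_lt() {
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "1.15 < 2"    # matches the lt result in the trace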
00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.258 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:34.258 12:57:00 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:18:34.258 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:18:34.258 00:18:34.258 real 0m0.228s 00:18:34.258 user 0m0.122s 00:18:34.258 sys 0m0.123s 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:34.258 ************************************ 00:18:34.258 END TEST nvmf_multicontroller 00:18:34.258 ************************************ 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.258 ************************************ 00:18:34.258 START TEST nvmf_aer 00:18:34.258 ************************************ 00:18:34.258 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:18:34.518 * Looking for test storage... 
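The "[: : integer expression expected" complaint just above (nvmf/common.sh line 33, and again every time a suite re-sources common.sh) is bash's [ refusing a numeric comparison against an empty string. The xtrace never names the variable being tested, so the sketch below uses a stand-in, flag; only the failing pattern is taken from the trace:

    flag=''
    [ "$flag" -eq 1 ]        # reproduces: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]   # defaulting the expansion yields a clean false instead

Because the test sits in an if condition, the non-zero status is swallowed and the run continues, which is why the message repeats harmlessly throughout this log.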
00:18:34.518 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.518 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.518 --rc genhtml_branch_coverage=1 00:18:34.519 --rc genhtml_function_coverage=1 00:18:34.519 --rc genhtml_legend=1 00:18:34.519 --rc geninfo_all_blocks=1 00:18:34.519 --rc geninfo_unexecuted_blocks=1 00:18:34.519 00:18:34.519 ' 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.519 --rc genhtml_branch_coverage=1 00:18:34.519 --rc genhtml_function_coverage=1 00:18:34.519 --rc genhtml_legend=1 00:18:34.519 --rc geninfo_all_blocks=1 00:18:34.519 --rc geninfo_unexecuted_blocks=1 00:18:34.519 00:18:34.519 ' 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.519 --rc genhtml_branch_coverage=1 00:18:34.519 --rc genhtml_function_coverage=1 00:18:34.519 --rc genhtml_legend=1 00:18:34.519 --rc geninfo_all_blocks=1 00:18:34.519 --rc geninfo_unexecuted_blocks=1 00:18:34.519 00:18:34.519 ' 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.519 --rc genhtml_branch_coverage=1 00:18:34.519 --rc genhtml_function_coverage=1 00:18:34.519 --rc genhtml_legend=1 00:18:34.519 --rc geninfo_all_blocks=1 00:18:34.519 --rc geninfo_unexecuted_blocks=1 00:18:34.519 00:18:34.519 ' 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.519 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:18:34.519 12:57:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:42.648 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.648 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:18:42.648 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:42.648 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:42.648 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:42.648 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:42.648 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:42.648 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:18:42.648 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:42.649 12:57:08 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:42.649 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:42.649 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:42.649 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:42.649 
12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:42.649 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.649 12:57:08 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:18:42.649 12:57:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:42.649 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:42.649 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:42.649 altname enp217s0f0np0 00:18:42.649 altname ens818f0np0 00:18:42.649 inet 192.168.100.8/24 scope global mlx_0_0 00:18:42.649 valid_lft forever preferred_lft forever 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:42.649 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:42.908 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:42.908 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:42.908 altname enp217s0f1np1 00:18:42.908 altname ens818f1np1 00:18:42.908 inet 192.168.100.9/24 scope global mlx_0_1 00:18:42.908 valid_lft forever preferred_lft forever 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer 
-- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:42.908 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:42.909 192.168.100.9' 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:42.909 192.168.100.9' 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:42.909 192.168.100.9' 
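The address harvesting traced at common.sh@116-117 is easier to read in one piece; a minimal reconstruction, with the interface names being the ones this node discovered:

    get_ip_address() {
        local interface=$1
        # field 4 of `ip -o -4 addr show` is ADDR/PREFIX; cut strips the prefix
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this host
    get_ip_address mlx_0_1   # -> 192.168.100.9

RDMA_IP_LIST is then just these two results joined by a newline, which is why the head/tail juggling below picks out the first and second target IPs.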
00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=8469 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 8469 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 8469 ']' 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.909 12:57:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:42.909 [2024-11-27 12:57:09.185561] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:18:42.909 [2024-11-27 12:57:09.185615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.909 [2024-11-27 12:57:09.275047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:43.169 [2024-11-27 12:57:09.317230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.169 [2024-11-27 12:57:09.317271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.169 [2024-11-27 12:57:09.317280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.169 [2024-11-27 12:57:09.317288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
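nvmfappstart, traced just above at common.sh@507-510, reduces to launching the target binary and blocking until its RPC socket answers. A condensed sketch; the socket-existence poll is a simplification of what waitforlisten actually does (it retries an RPC against /var/tmp/spdk.sock):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do   # simplification of waitforlisten
        sleep 0.1
    done

The -m 0xF core mask is why four reactors come up on cores 0-3 in the notices around this point.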
00:18:43.169 [2024-11-27 12:57:09.317295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.169 [2024-11-27 12:57:09.318958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.169 [2024-11-27 12:57:09.319055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.169 [2024-11-27 12:57:09.319149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:43.169 [2024-11-27 12:57:09.319150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.737 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.737 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:18:43.737 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.737 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.737 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:43.737 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.737 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:43.737 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.737 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:43.737 [2024-11-27 12:57:10.115248] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1623df0/0x16282e0) succeed. 00:18:43.997 [2024-11-27 12:57:10.124437] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1625480/0x1669980) succeed. 
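With both mlx5 IB devices created, the aer test provisions the target through rpc_cmd, which is effectively a front end for scripts/rpc.py talking to /var/tmp/spdk.sock. The same bring-up, issued by hand, would look like this (arguments copied from the traces that follow):

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

-m 2 caps the subsystem at two namespaces, which is exactly what the AER test needs: one namespace present at attach time and a second added later to trigger the changed-namespace event.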
00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 Malloc0 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 [2024-11-27 12:57:10.296681] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.997 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:43.997 [ 00:18:43.997 { 00:18:43.997 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:43.997 "subtype": "Discovery", 00:18:43.997 "listen_addresses": [], 00:18:43.997 "allow_any_host": true, 00:18:43.997 "hosts": [] 00:18:43.997 }, 00:18:43.997 { 00:18:43.997 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.997 "subtype": "NVMe", 00:18:43.997 "listen_addresses": [ 00:18:43.997 { 00:18:43.997 "trtype": "RDMA", 00:18:43.997 "adrfam": "IPv4", 00:18:43.997 "traddr": "192.168.100.8", 00:18:43.997 "trsvcid": "4420" 00:18:43.997 } 00:18:43.997 ], 00:18:43.997 "allow_any_host": true, 00:18:43.997 "hosts": [], 00:18:43.997 "serial_number": "SPDK00000000000001", 00:18:43.997 "model_number": "SPDK bdev Controller", 00:18:43.997 "max_namespaces": 2, 00:18:43.998 "min_cntlid": 1, 00:18:43.998 "max_cntlid": 65519, 00:18:43.998 "namespaces": [ 00:18:43.998 { 00:18:43.998 "nsid": 1, 00:18:43.998 "bdev_name": "Malloc0", 00:18:43.998 "name": "Malloc0", 00:18:43.998 "nguid": "3DF8C6E86F8B46B1AB47CF88D231A5F6", 00:18:43.998 "uuid": "3df8c6e8-6f8b-46b1-ab47-cf88d231a5f6" 00:18:43.998 } 00:18:43.998 ] 00:18:43.998 } 00:18:43.998 ] 00:18:43.998 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.998 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:43.998 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:18:43.998 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=8754 00:18:43.998 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:18:43.998 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:18:43.998 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:18:43.998 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:43.998 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:43.998 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:18:43.998 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:44.258 Malloc1 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:44.258 [ 00:18:44.258 { 00:18:44.258 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:44.258 "subtype": "Discovery", 00:18:44.258 "listen_addresses": [], 00:18:44.258 "allow_any_host": true, 00:18:44.258 "hosts": [] 00:18:44.258 }, 00:18:44.258 { 00:18:44.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.258 "subtype": "NVMe", 00:18:44.258 "listen_addresses": [ 00:18:44.258 { 00:18:44.258 "trtype": "RDMA", 00:18:44.258 "adrfam": "IPv4", 00:18:44.258 "traddr": "192.168.100.8", 00:18:44.258 "trsvcid": "4420" 00:18:44.258 } 00:18:44.258 ], 00:18:44.258 "allow_any_host": true, 00:18:44.258 "hosts": [], 00:18:44.258 "serial_number": "SPDK00000000000001", 00:18:44.258 "model_number": "SPDK bdev Controller", 00:18:44.258 "max_namespaces": 2, 00:18:44.258 "min_cntlid": 1, 00:18:44.258 "max_cntlid": 65519, 00:18:44.258 "namespaces": [ 00:18:44.258 { 00:18:44.258 "nsid": 1, 00:18:44.258 "bdev_name": "Malloc0", 00:18:44.258 "name": "Malloc0", 00:18:44.258 "nguid": "3DF8C6E86F8B46B1AB47CF88D231A5F6", 00:18:44.258 "uuid": "3df8c6e8-6f8b-46b1-ab47-cf88d231a5f6" 00:18:44.258 }, 00:18:44.258 { 00:18:44.258 "nsid": 2, 00:18:44.258 "bdev_name": "Malloc1", 00:18:44.258 "name": "Malloc1", 00:18:44.258 "nguid": "0DFCDC418E8E4D5E9B235D746C5DF46A", 00:18:44.258 "uuid": "0dfcdc41-8e8e-4d5e-9b23-5d746c5df46a" 00:18:44.258 } 00:18:44.258 ] 00:18:44.258 } 00:18:44.258 ] 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.258 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 8754 00:18:44.258 Asynchronous Event Request test 00:18:44.258 Attaching to 192.168.100.8 00:18:44.258 Attached to 192.168.100.8 00:18:44.258 Registering asynchronous event callbacks... 00:18:44.258 Starting namespace attribute notice tests for all controllers... 00:18:44.259 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:44.259 aer_cb - Changed Namespace 00:18:44.259 Cleaning up... 
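The handshake between the shell and the aer binary is the touch file: aer.sh removes /tmp/aer_touch_file, starts the aer program with -t pointing at it, and spins in waitforfile until the program creates the file once its event callbacks are armed. Reconstructed from the autotest_common.sh@1269-1280 trace above (two poll iterations sufficed on this run):

    waitforfile() {
        local i=0
        while [ ! -e "$1" ]; do
            [ "$i" -lt 200 ] || return 1   # give up after ~20s of 0.1s polls
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }
    waitforfile /tmp/aer_touch_file

Only after this returns does the script add Malloc1 as a second namespace, which is the change the outstanding AER then reports ("aer_cb - Changed Namespace" above).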
00:18:44.259 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:44.259 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.259 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:44.518 rmmod nvme_rdma 00:18:44.518 rmmod nvme_fabrics 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 8469 ']' 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 8469 00:18:44.518 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 8469 ']' 00:18:44.519 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 8469 00:18:44.519 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:18:44.519 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.519 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 8469 00:18:44.519 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:44.519 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:44.519 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 8469' 00:18:44.519 killing process with pid 8469 
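killprocess, traced here at autotest_common.sh@954-978, wraps the kill in sanity checks; in outline (the real helper also branches on uname, and escalation details are omitted):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1     # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")    # reactor_0 for nvmf_tgt
        [ "$name" != sudo ] || return 1            # refuse to signal sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }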
00:18:44.519 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 8469 00:18:44.519 12:57:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 8469 00:18:44.778 12:57:11 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.778 12:57:11 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:44.778 00:18:44.778 real 0m10.429s 00:18:44.778 user 0m9.247s 00:18:44.778 sys 0m6.946s 00:18:44.778 12:57:11 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.778 12:57:11 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:44.778 ************************************ 00:18:44.778 END TEST nvmf_aer 00:18:44.778 ************************************ 00:18:44.778 12:57:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:18:44.778 12:57:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:44.778 12:57:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.778 12:57:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.778 ************************************ 00:18:44.778 START TEST nvmf_async_init 00:18:44.778 ************************************ 00:18:44.778 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:18:45.038 * Looking for test storage... 00:18:45.038 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:18:45.038 12:57:11 
nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:45.038 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:45.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.039 --rc genhtml_branch_coverage=1 00:18:45.039 --rc genhtml_function_coverage=1 00:18:45.039 --rc genhtml_legend=1 00:18:45.039 --rc geninfo_all_blocks=1 00:18:45.039 --rc geninfo_unexecuted_blocks=1 00:18:45.039 00:18:45.039 ' 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:45.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.039 --rc genhtml_branch_coverage=1 00:18:45.039 --rc genhtml_function_coverage=1 00:18:45.039 --rc genhtml_legend=1 00:18:45.039 --rc geninfo_all_blocks=1 00:18:45.039 --rc geninfo_unexecuted_blocks=1 00:18:45.039 00:18:45.039 ' 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:45.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.039 --rc genhtml_branch_coverage=1 00:18:45.039 --rc genhtml_function_coverage=1 00:18:45.039 --rc genhtml_legend=1 00:18:45.039 --rc geninfo_all_blocks=1 00:18:45.039 --rc geninfo_unexecuted_blocks=1 00:18:45.039 00:18:45.039 ' 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:45.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.039 --rc genhtml_branch_coverage=1 00:18:45.039 --rc genhtml_function_coverage=1 00:18:45.039 --rc genhtml_legend=1 00:18:45.039 --rc geninfo_all_blocks=1 00:18:45.039 --rc geninfo_unexecuted_blocks=1 00:18:45.039 00:18:45.039 ' 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.039 
12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.039 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
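Note the captured error "[: : integer expression expected" from nvmf/common.sh line 33: the trace shows '[' '' -eq 1 ']', i.e. an unset flag expanding to the empty string, which test(1) cannot compare numerically; the condition just evaluates false and the run continues. A defensive pattern that avoids the noise is sketched below; SOME_FLAG is a hypothetical stand-in, since the real variable name at line 33 is not visible in the trace:

    # Hypothetical flag; common.sh line 33 tests a variable that is empty in this run.
    SOME_FLAG=""
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then  # ":-0" guarantees test(1) always sees an integer
        echo "flag enabled"
    fi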
00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=78c81c37b92f49b1832e33b8a1739eba 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:18:45.039 12:57:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:53.159 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:53.159 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:53.159 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:53.159 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:53.159 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:53.418 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:53.418 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:53.418 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:53.418 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:53.418 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:53.418 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:53.419 12:57:19 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:53.419 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:53.419 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:53.419 altname enp217s0f0np0 00:18:53.419 altname ens818f0np0 00:18:53.419 inet 192.168.100.8/24 scope global mlx_0_0 00:18:53.419 valid_lft forever preferred_lft forever 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:53.419 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:53.419 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:53.419 altname enp217s0f1np1 00:18:53.419 altname ens818f1np1 00:18:53.419 inet 192.168.100.9/24 scope global mlx_0_1 00:18:53.419 valid_lft forever preferred_lft forever 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 
2 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:53.419 192.168.100.9' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:53.419 192.168.100.9' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:53.419 192.168.100.9' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=12931 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 12931 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 12931 ']' 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.419 12:57:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.419 [2024-11-27 12:57:19.791039] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:18:53.420 [2024-11-27 12:57:19.791090] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.679 [2024-11-27 12:57:19.879615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.679 [2024-11-27 12:57:19.918946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.679 [2024-11-27 12:57:19.918985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.679 [2024-11-27 12:57:19.918995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.679 [2024-11-27 12:57:19.919003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.679 [2024-11-27 12:57:19.919010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
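The nvmfappstart -m 0x1 step above boils down to launching the target binary with the flags captured in the trace and then polling its RPC socket until the app answers, which is what waitforlisten 12931 does. A sketch of that start-and-wait pattern; the polling loop is illustrative rather than the literal waitforlisten implementation:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # same flags as in the trace
    nvmfpid=$!
    # Poll the default RPC socket until the target is ready, like waitforlisten.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        sleep 0.5
    done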
00:18:53.679 [2024-11-27 12:57:19.919601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.679 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.679 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:18:53.679 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.679 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.679 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.679 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.679 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:18:53.679 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.679 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.937 [2024-11-27 12:57:20.082803] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb01b80/0xb06070) succeed. 00:18:53.937 [2024-11-27 12:57:20.091956] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb03030/0xb47710) succeed. 00:18:53.937 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.938 null0 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 78c81c37b92f49b1832e33b8a1739eba 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.938 [2024-11-27 12:57:20.161345] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.938 nvme0n1 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.938 [ 00:18:53.938 { 00:18:53.938 "name": "nvme0n1", 00:18:53.938 "aliases": [ 00:18:53.938 "78c81c37-b92f-49b1-832e-33b8a1739eba" 00:18:53.938 ], 00:18:53.938 "product_name": "NVMe disk", 00:18:53.938 "block_size": 512, 00:18:53.938 "num_blocks": 2097152, 00:18:53.938 "uuid": "78c81c37-b92f-49b1-832e-33b8a1739eba", 00:18:53.938 "numa_id": 1, 00:18:53.938 "assigned_rate_limits": { 00:18:53.938 "rw_ios_per_sec": 0, 00:18:53.938 "rw_mbytes_per_sec": 0, 00:18:53.938 "r_mbytes_per_sec": 0, 00:18:53.938 "w_mbytes_per_sec": 0 00:18:53.938 }, 00:18:53.938 "claimed": false, 00:18:53.938 "zoned": false, 00:18:53.938 "supported_io_types": { 00:18:53.938 "read": true, 00:18:53.938 "write": true, 00:18:53.938 "unmap": false, 00:18:53.938 "flush": true, 00:18:53.938 "reset": true, 00:18:53.938 "nvme_admin": true, 00:18:53.938 "nvme_io": true, 00:18:53.938 "nvme_io_md": false, 00:18:53.938 "write_zeroes": true, 00:18:53.938 "zcopy": false, 00:18:53.938 "get_zone_info": false, 00:18:53.938 "zone_management": false, 00:18:53.938 "zone_append": false, 00:18:53.938 "compare": true, 00:18:53.938 "compare_and_write": true, 00:18:53.938 "abort": true, 00:18:53.938 "seek_hole": false, 00:18:53.938 "seek_data": false, 00:18:53.938 "copy": true, 00:18:53.938 "nvme_iov_md": false 00:18:53.938 }, 00:18:53.938 "memory_domains": [ 00:18:53.938 { 00:18:53.938 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:53.938 "dma_device_type": 0 00:18:53.938 } 00:18:53.938 ], 00:18:53.938 "driver_specific": { 00:18:53.938 "nvme": [ 00:18:53.938 { 00:18:53.938 "trid": { 00:18:53.938 "trtype": "RDMA", 00:18:53.938 "adrfam": "IPv4", 00:18:53.938 "traddr": "192.168.100.8", 00:18:53.938 "trsvcid": "4420", 00:18:53.938 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:53.938 }, 00:18:53.938 "ctrlr_data": { 00:18:53.938 "cntlid": 1, 00:18:53.938 "vendor_id": "0x8086", 00:18:53.938 "model_number": "SPDK bdev Controller", 00:18:53.938 "serial_number": "00000000000000000000", 00:18:53.938 "firmware_revision": "25.01", 00:18:53.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:53.938 "oacs": { 00:18:53.938 "security": 0, 
00:18:53.938 "format": 0, 00:18:53.938 "firmware": 0, 00:18:53.938 "ns_manage": 0 00:18:53.938 }, 00:18:53.938 "multi_ctrlr": true, 00:18:53.938 "ana_reporting": false 00:18:53.938 }, 00:18:53.938 "vs": { 00:18:53.938 "nvme_version": "1.3" 00:18:53.938 }, 00:18:53.938 "ns_data": { 00:18:53.938 "id": 1, 00:18:53.938 "can_share": true 00:18:53.938 } 00:18:53.938 } 00:18:53.938 ], 00:18:53.938 "mp_policy": "active_passive" 00:18:53.938 } 00:18:53.938 } 00:18:53.938 ] 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.938 [2024-11-27 12:57:20.267656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:53.938 [2024-11-27 12:57:20.285372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:53.938 [2024-11-27 12:57:20.306848] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.938 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:53.938 [ 00:18:53.938 { 00:18:53.938 "name": "nvme0n1", 00:18:53.938 "aliases": [ 00:18:53.938 "78c81c37-b92f-49b1-832e-33b8a1739eba" 00:18:53.938 ], 00:18:53.938 "product_name": "NVMe disk", 00:18:53.938 "block_size": 512, 00:18:53.938 "num_blocks": 2097152, 00:18:54.197 "uuid": "78c81c37-b92f-49b1-832e-33b8a1739eba", 00:18:54.197 "numa_id": 1, 00:18:54.197 "assigned_rate_limits": { 00:18:54.197 "rw_ios_per_sec": 0, 00:18:54.197 "rw_mbytes_per_sec": 0, 00:18:54.197 "r_mbytes_per_sec": 0, 00:18:54.197 "w_mbytes_per_sec": 0 00:18:54.197 }, 00:18:54.197 "claimed": false, 00:18:54.197 "zoned": false, 00:18:54.197 "supported_io_types": { 00:18:54.197 "read": true, 00:18:54.197 "write": true, 00:18:54.197 "unmap": false, 00:18:54.197 "flush": true, 00:18:54.197 "reset": true, 00:18:54.197 "nvme_admin": true, 00:18:54.197 "nvme_io": true, 00:18:54.197 "nvme_io_md": false, 00:18:54.197 "write_zeroes": true, 00:18:54.197 "zcopy": false, 00:18:54.197 "get_zone_info": false, 00:18:54.197 "zone_management": false, 00:18:54.197 "zone_append": false, 00:18:54.197 "compare": true, 00:18:54.197 "compare_and_write": true, 00:18:54.197 "abort": true, 00:18:54.197 "seek_hole": false, 00:18:54.197 "seek_data": false, 00:18:54.197 "copy": true, 00:18:54.197 "nvme_iov_md": false 00:18:54.197 }, 00:18:54.197 "memory_domains": [ 00:18:54.197 { 00:18:54.197 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:54.197 "dma_device_type": 0 00:18:54.197 } 00:18:54.197 ], 00:18:54.197 "driver_specific": { 00:18:54.197 "nvme": [ 00:18:54.197 { 00:18:54.197 "trid": { 00:18:54.197 "trtype": "RDMA", 00:18:54.197 "adrfam": "IPv4", 00:18:54.197 "traddr": "192.168.100.8", 
00:18:54.197 "trsvcid": "4420", 00:18:54.197 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:54.197 }, 00:18:54.197 "ctrlr_data": { 00:18:54.197 "cntlid": 2, 00:18:54.197 "vendor_id": "0x8086", 00:18:54.197 "model_number": "SPDK bdev Controller", 00:18:54.197 "serial_number": "00000000000000000000", 00:18:54.197 "firmware_revision": "25.01", 00:18:54.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:54.197 "oacs": { 00:18:54.197 "security": 0, 00:18:54.197 "format": 0, 00:18:54.197 "firmware": 0, 00:18:54.197 "ns_manage": 0 00:18:54.197 }, 00:18:54.197 "multi_ctrlr": true, 00:18:54.197 "ana_reporting": false 00:18:54.197 }, 00:18:54.197 "vs": { 00:18:54.197 "nvme_version": "1.3" 00:18:54.197 }, 00:18:54.197 "ns_data": { 00:18:54.197 "id": 1, 00:18:54.197 "can_share": true 00:18:54.197 } 00:18:54.197 } 00:18:54.197 ], 00:18:54.197 "mp_policy": "active_passive" 00:18:54.197 } 00:18:54.197 } 00:18:54.197 ] 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.DBk9uvknEv 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.DBk9uvknEv 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.DBk9uvknEv 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:54.197 [2024-11-27 12:57:20.394345] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:54.197 [2024-11-27 12:57:20.410385] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.197 nvme0n1 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.197 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:54.197 [ 00:18:54.197 { 00:18:54.197 "name": "nvme0n1", 00:18:54.197 "aliases": [ 00:18:54.197 "78c81c37-b92f-49b1-832e-33b8a1739eba" 00:18:54.197 ], 00:18:54.197 "product_name": "NVMe disk", 00:18:54.197 "block_size": 512, 00:18:54.197 "num_blocks": 2097152, 00:18:54.197 "uuid": "78c81c37-b92f-49b1-832e-33b8a1739eba", 00:18:54.197 "numa_id": 1, 00:18:54.197 "assigned_rate_limits": { 00:18:54.197 "rw_ios_per_sec": 0, 00:18:54.197 "rw_mbytes_per_sec": 0, 00:18:54.197 "r_mbytes_per_sec": 0, 00:18:54.197 "w_mbytes_per_sec": 0 00:18:54.197 }, 00:18:54.197 "claimed": false, 00:18:54.197 "zoned": false, 00:18:54.197 "supported_io_types": { 00:18:54.197 "read": true, 00:18:54.197 "write": true, 00:18:54.197 "unmap": false, 00:18:54.197 "flush": true, 00:18:54.197 "reset": true, 00:18:54.197 "nvme_admin": true, 00:18:54.197 "nvme_io": true, 00:18:54.197 "nvme_io_md": false, 00:18:54.197 "write_zeroes": true, 00:18:54.197 "zcopy": false, 00:18:54.197 "get_zone_info": false, 00:18:54.197 "zone_management": false, 00:18:54.197 "zone_append": false, 00:18:54.197 "compare": true, 00:18:54.197 "compare_and_write": true, 00:18:54.197 "abort": true, 00:18:54.197 "seek_hole": false, 00:18:54.197 "seek_data": false, 00:18:54.197 "copy": true, 00:18:54.197 "nvme_iov_md": false 00:18:54.197 }, 00:18:54.197 "memory_domains": [ 00:18:54.197 { 00:18:54.197 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:54.197 "dma_device_type": 0 00:18:54.197 } 00:18:54.198 ], 00:18:54.198 "driver_specific": { 00:18:54.198 "nvme": [ 00:18:54.198 { 00:18:54.198 "trid": { 00:18:54.198 "trtype": "RDMA", 00:18:54.198 "adrfam": "IPv4", 00:18:54.198 "traddr": "192.168.100.8", 00:18:54.198 "trsvcid": "4421", 00:18:54.198 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:54.198 }, 00:18:54.198 "ctrlr_data": { 00:18:54.198 "cntlid": 3, 00:18:54.198 "vendor_id": "0x8086", 00:18:54.198 "model_number": "SPDK bdev Controller", 00:18:54.198 
"serial_number": "00000000000000000000", 00:18:54.198 "firmware_revision": "25.01", 00:18:54.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:54.198 "oacs": { 00:18:54.198 "security": 0, 00:18:54.198 "format": 0, 00:18:54.198 "firmware": 0, 00:18:54.198 "ns_manage": 0 00:18:54.198 }, 00:18:54.198 "multi_ctrlr": true, 00:18:54.198 "ana_reporting": false 00:18:54.198 }, 00:18:54.198 "vs": { 00:18:54.198 "nvme_version": "1.3" 00:18:54.198 }, 00:18:54.198 "ns_data": { 00:18:54.198 "id": 1, 00:18:54.198 "can_share": true 00:18:54.198 } 00:18:54.198 } 00:18:54.198 ], 00:18:54.198 "mp_policy": "active_passive" 00:18:54.198 } 00:18:54.198 } 00:18:54.198 ] 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.DBk9uvknEv 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:54.198 rmmod nvme_rdma 00:18:54.198 rmmod nvme_fabrics 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:18:54.198 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:18:54.457 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 12931 ']' 00:18:54.457 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 12931 00:18:54.457 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 12931 ']' 00:18:54.457 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 12931 00:18:54.457 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:18:54.457 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.457 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 12931 00:18:54.457 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:54.457 12:57:20 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:54.457 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 12931' 00:18:54.457 killing process with pid 12931 00:18:54.457 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 12931 00:18:54.457 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 12931 00:18:54.715 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:54.716 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:54.716 00:18:54.716 real 0m9.725s 00:18:54.716 user 0m3.577s 00:18:54.716 sys 0m6.810s 00:18:54.716 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.716 12:57:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:54.716 ************************************ 00:18:54.716 END TEST nvmf_async_init 00:18:54.716 ************************************ 00:18:54.716 12:57:20 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:18:54.716 12:57:20 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:54.716 12:57:20 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.716 12:57:20 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.716 ************************************ 00:18:54.716 START TEST dma 00:18:54.716 ************************************ 00:18:54.716 12:57:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:18:54.716 * Looking for test storage... 
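Before the dma test output begins, the async_init flow above condenses to the following JSON-RPC sequence; every command and flag is copied from the trace, with only the rpc shorthand for scripts/rpc.py added:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc bdev_null_create null0 1024 512            # 1024 MiB bdev, 512 B blocks -> 2097152 blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 78c81c37b92f49b1832e33b8a1739eba
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0               # loopback attach -> nvme0n1 appears
    # Secure-channel variant exercised at the end of the test:
    $rpc keyring_file_add_key key0 /tmp/tmp.DBk9uvknEv   # PSK file created with mktemp, chmod 0600
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0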
00:18:54.716 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.716 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:54.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.973 --rc genhtml_branch_coverage=1 00:18:54.973 --rc genhtml_function_coverage=1 00:18:54.973 --rc genhtml_legend=1 00:18:54.973 --rc geninfo_all_blocks=1 00:18:54.973 --rc geninfo_unexecuted_blocks=1 00:18:54.973 00:18:54.973 ' 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:54.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.973 --rc genhtml_branch_coverage=1 00:18:54.973 --rc genhtml_function_coverage=1 00:18:54.973 --rc genhtml_legend=1 00:18:54.973 --rc geninfo_all_blocks=1 00:18:54.973 --rc geninfo_unexecuted_blocks=1 00:18:54.973 00:18:54.973 ' 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:54.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.973 --rc genhtml_branch_coverage=1 00:18:54.973 --rc genhtml_function_coverage=1 00:18:54.973 --rc genhtml_legend=1 00:18:54.973 --rc geninfo_all_blocks=1 00:18:54.973 --rc geninfo_unexecuted_blocks=1 00:18:54.973 00:18:54.973 ' 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:54.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.973 --rc genhtml_branch_coverage=1 00:18:54.973 --rc genhtml_function_coverage=1 00:18:54.973 --rc genhtml_legend=1 00:18:54.973 --rc geninfo_all_blocks=1 00:18:54.973 --rc geninfo_unexecuted_blocks=1 00:18:54.973 00:18:54.973 ' 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:54.973 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:54.974 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
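
Editor's note: the cmp_versions trace at the top of this test (scripts/common.sh@333-368) is the component-wise compare behind "lt 1.15 2": both version strings are split on ".", "-" and ":", and the first differing component decides. A minimal standalone bash sketch of that logic, assuming numeric components (cmp_lt is a hypothetical name, not the upstream function):

cmp_lt() {
    # Split "1.15" -> (1 15) and "2" -> (2) on the same IFS the script uses.
    local -a v1 v2
    local i len
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower component wins
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
cmp_lt 1.15 2 && echo 'lcov < 2: enable the legacy --rc lcov_* options'
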
00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:18:54.974 12:57:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:03.109 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:03.109 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:03.109 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:03.109 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:03.109 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:03.110 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:03.110 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:03.110 altname enp217s0f0np0 00:19:03.110 altname ens818f0np0 00:19:03.110 inet 192.168.100.8/24 scope global mlx_0_0 00:19:03.110 valid_lft forever preferred_lft forever 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:03.110 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:03.110 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:03.110 altname enp217s0f1np1 00:19:03.110 altname ens818f1np1 00:19:03.110 inet 192.168.100.9/24 scope global mlx_0_1 00:19:03.110 valid_lft forever preferred_lft forever 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:03.110 192.168.100.9' 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:03.110 192.168.100.9' 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:03.110 192.168.100.9' 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:19:03.110 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:19:03.367 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:03.367 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:03.367 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:03.367 12:57:29 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:03.367 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:03.367 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:03.367 12:57:29 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:19:03.367 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:03.367 12:57:29 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.367 12:57:29 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:03.367 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=17144 00:19:03.368 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:03.368 12:57:29 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 17144 00:19:03.368 12:57:29 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 17144 ']' 00:19:03.368 12:57:29 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.368 12:57:29 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.368 12:57:29 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.368 12:57:29 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.368 12:57:29 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:03.368 [2024-11-27 12:57:29.580777] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:19:03.368 [2024-11-27 12:57:29.580827] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.368 [2024-11-27 12:57:29.668060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:03.368 [2024-11-27 12:57:29.707764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.368 [2024-11-27 12:57:29.707803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.368 [2024-11-27 12:57:29.707813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.368 [2024-11-27 12:57:29.707823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.368 [2024-11-27 12:57:29.707831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
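
Editor's note: waitforlisten above blocks until pid 17144 is alive and the target's RPC socket answers. A rough sketch of that wait loop, under the assumption it polls an RPC such as rpc_get_methods (wait_for_spdk_sock is a hypothetical name; the real helper lives in autotest_common.sh):

wait_for_spdk_sock() {
    # Succeed once the target responds on /var/tmp/spdk.sock; fail if it dies first.
    local pid=$1 rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    local i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2> /dev/null || return 1
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}
wait_for_spdk_sock 17144
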
00:19:03.368 [2024-11-27 12:57:29.709134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.368 [2024-11-27 12:57:29.709137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:04.302 [2024-11-27 12:57:30.477642] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa16730/0xa1ac20) succeed. 00:19:04.302 [2024-11-27 12:57:30.486651] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa17c80/0xa5c2c0) succeed. 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:04.302 Malloc0 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:04.302 [2024-11-27 12:57:30.633888] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:04.302 { 00:19:04.302 "params": { 00:19:04.302 "name": "Nvme$subsystem", 00:19:04.302 "trtype": "$TEST_TRANSPORT", 00:19:04.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.302 "adrfam": "ipv4", 00:19:04.302 "trsvcid": "$NVMF_PORT", 00:19:04.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.302 "hdgst": ${hdgst:-false}, 00:19:04.302 "ddgst": ${ddgst:-false} 00:19:04.302 }, 00:19:04.302 "method": "bdev_nvme_attach_controller" 00:19:04.302 } 00:19:04.302 EOF 00:19:04.302 )") 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:19:04.302 12:57:30 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:04.302 "params": { 00:19:04.302 "name": "Nvme0", 00:19:04.302 "trtype": "rdma", 00:19:04.302 "traddr": "192.168.100.8", 00:19:04.302 "adrfam": "ipv4", 00:19:04.302 "trsvcid": "4420", 00:19:04.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:04.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:04.302 "hdgst": false, 00:19:04.302 "ddgst": false 00:19:04.302 }, 00:19:04.302 "method": "bdev_nvme_attach_controller" 00:19:04.302 }' 00:19:04.561 [2024-11-27 12:57:30.686042] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
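
Editor's note: the gen_nvmf_target_json steps above expand a heredoc template per subsystem and normalize it with jq before handing it to test_dma on /dev/fd/62. A condensed sketch of the same pattern, with the values copied from the config printed above (single controller only; the hypothetical helper name and argument handling are simplifications):

gen_target_json() {
    local subsystem=${1:-0} ip=${2:-192.168.100.8}
    # Unquoted heredoc so $subsystem and $ip expand; jq validates the result.
    jq . << JSON
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "$ip",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
JSON
}
gen_target_json 0 192.168.100.8
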
00:19:04.561 [2024-11-27 12:57:30.686086] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17423 ]
00:19:04.561 [2024-11-27 12:57:30.772903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:04.561 [2024-11-27 12:57:30.813652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:19:04.561 [2024-11-27 12:57:30.813657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:09.833 bdev Nvme0n1 reports 1 memory domains
00:19:09.833 bdev Nvme0n1 supports RDMA memory domain
00:19:09.833 Initialization complete, running randrw IO for 5 sec on 2 cores
00:19:09.833 ==========================================================================
00:19:09.833 Latency [us]
00:19:09.833 IOPS     MiB/s    Average    min    max
00:19:09.833 Core 2: 21385.11 83.54 747.34 253.93 7823.44
00:19:09.833 Core 3: 21409.70 83.63 746.51 248.36 7980.95
00:19:09.833 ==========================================================================
00:19:09.833 Total : 42794.80 167.17 746.93 248.36 7980.95
00:19:09.833
00:19:09.833 Total operations: 214051, translate 214051 pull_push 0 memzero 0
00:19:09.833 12:57:36 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push
00:19:09.833 12:57:36 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json
00:19:09.833 12:57:36 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq .
00:19:10.092 [2024-11-27 12:57:36.233493] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
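
Editor's note: the Total row in the translate table above is just the per-core sums, with the average latency weighted by IOPS (the weighting is an assumption about how the tool aggregates; it matches the printed numbers up to rounding). A quick check:

awk 'BEGIN {
    iops2 = 21385.11; avg2 = 747.34   # Core 2 row above
    iops3 = 21409.70; avg3 = 746.51   # Core 3 row above
    printf "total IOPS: %.2f\n", iops2 + iops3   # 42794.81, printed as 42794.80
    printf "weighted avg: %.2f us\n", (iops2 * avg2 + iops3 * avg3) / (iops2 + iops3)   # ~746.93
}'
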
00:19:10.092 [2024-11-27 12:57:36.233552] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid18235 ]
00:19:10.092 [2024-11-27 12:57:36.317753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:10.092 [2024-11-27 12:57:36.356154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:19:10.092 [2024-11-27 12:57:36.356156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:15.362 bdev Malloc0 reports 2 memory domains
00:19:15.362 bdev Malloc0 doesn't support RDMA memory domain
00:19:15.362 Initialization complete, running randrw IO for 5 sec on 2 cores
00:19:15.362 ==========================================================================
00:19:15.362 Latency [us]
00:19:15.362 IOPS     MiB/s    Average    min    max
00:19:15.362 Core 2: 14159.71 55.31 1129.28 424.99 1455.09
00:19:15.362 Core 3: 14404.83 56.27 1110.04 454.52 2090.22
00:19:15.362 ==========================================================================
00:19:15.362 Total : 28564.54 111.58 1119.58 424.99 2090.22
00:19:15.362
00:19:15.362 Total operations: 142874, translate 0 pull_push 571496 memzero 0
00:19:15.362 12:57:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero
00:19:15.362 12:57:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0
00:19:15.362 12:57:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0
00:19:15.362 12:57:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq .
00:19:15.362 Ignoring -M option
00:19:15.362 [2024-11-27 12:57:41.679570] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
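
Editor's note: the memzero pass above targets lvs0/lvol0, a logical-volume bdev whose creation happens out of frame in this log. For reference, the standard SPDK RPCs that produce such a bdev look roughly like this (the base bdev and the size are illustrative assumptions, not taken from this run):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc_py bdev_lvol_create_lvstore Malloc0 lvs0   # lvstore "lvs0" on an existing bdev
$rpc_py bdev_lvol_create -l lvs0 lvol0 128      # 128 MiB lvol -> bdev named "lvs0/lvol0"
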
00:19:15.362 [2024-11-27 12:57:41.679626] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid19283 ] 00:19:15.621 [2024-11-27 12:57:41.765779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:15.621 [2024-11-27 12:57:41.806221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:15.621 [2024-11-27 12:57:41.806225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.888 bdev b83a2337-bbe7-494f-abcc-9229316ac0e1 reports 1 memory domains 00:19:20.888 bdev b83a2337-bbe7-494f-abcc-9229316ac0e1 supports RDMA memory domain 00:19:20.888 Initialization complete, running randread IO for 5 sec on 2 cores 00:19:20.888 ========================================================================== 00:19:20.888 Latency [us] 00:19:20.888 IOPS MiB/s Average min max 00:19:20.888 Core 2: 77517.94 302.80 205.68 75.06 3649.82 00:19:20.888 Core 3: 80791.78 315.59 197.34 68.21 3670.75 00:19:20.888 ========================================================================== 00:19:20.888 Total : 158309.71 618.40 201.42 68.21 3670.75 00:19:20.888 00:19:20.888 Total operations: 791635, translate 0 pull_push 0 memzero 791635 00:19:20.888 12:57:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:19:21.146 [2024-11-27 12:57:47.356120] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:23.678 Initializing NVMe Controllers 00:19:23.678 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:19:23.678 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:19:23.678 Initialization complete. Launching workers. 00:19:23.678 ======================================================== 00:19:23.678 Latency(us) 00:19:23.678 Device Information : IOPS MiB/s Average min max 00:19:23.678 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2012.75 7.86 7979.99 7950.57 7997.76 00:19:23.678 ======================================================== 00:19:23.678 Total : 2012.75 7.86 7979.99 7950.57 7997.76 00:19:23.678 00:19:23.678 12:57:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:19:23.678 12:57:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:19:23.678 12:57:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:19:23.678 12:57:49 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:19:23.678 [2024-11-27 12:57:49.699381] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:19:23.678 [2024-11-27 12:57:49.699427] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20616 ]
00:19:23.678 [2024-11-27 12:57:49.785821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:23.678 [2024-11-27 12:57:49.826306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:19:23.678 [2024-11-27 12:57:49.826309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:28.947 bdev fa5c9b44-3d05-40f9-9e65-2cd0bf27e296 reports 1 memory domains
00:19:28.947 bdev fa5c9b44-3d05-40f9-9e65-2cd0bf27e296 supports RDMA memory domain
00:19:28.947 Initialization complete, running randrw IO for 5 sec on 2 cores
00:19:28.947 ==========================================================================
00:19:28.947 Latency [us]
00:19:28.947 IOPS     MiB/s    Average    min    max
00:19:28.947 Core 2: 18851.00 73.64 848.07 15.49 8846.57
00:19:28.947 Core 3: 19155.72 74.83 834.61 10.78 9047.73
00:19:28.947 ==========================================================================
00:19:28.947 Total : 38006.72 148.46 841.28 10.78 9047.73
00:19:28.947
00:19:28.947 Total operations: 190082, translate 189974 pull_push 0 memzero 108
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:19:28.948 rmmod nvme_rdma
00:19:28.948 rmmod nvme_fabrics
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 17144 ']'
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 17144
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 17144 ']'
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 17144
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:28.948 12:57:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 17144
00:19:29.207 12:57:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:29.207 12:57:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:29.207 12:57:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 17144'
00:19:29.207 killing process with pid 17144
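
Editor's note: killprocess above double-checks what it is about to kill: the pid must still exist and its comm (reactor_0 here) must not be sudo. A compressed sketch of that guard (kill_target is a hypothetical name; the real function also retries and handles root):

kill_target() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 0            # already gone, nothing to do
    process_name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
    [[ $process_name == sudo ]] && return 1            # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null || true                   # reap only works for our own children
}
kill_target 17144
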
00:19:29.207 12:57:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 17144 00:19:29.207 12:57:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 17144 00:19:29.467 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:29.467 12:57:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:29.467 00:19:29.467 real 0m34.744s 00:19:29.467 user 1m37.048s 00:19:29.467 sys 0m7.667s 00:19:29.467 12:57:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.467 12:57:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:29.467 ************************************ 00:19:29.467 END TEST dma 00:19:29.467 ************************************ 00:19:29.467 12:57:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:19:29.467 12:57:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:29.467 12:57:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.467 12:57:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.467 ************************************ 00:19:29.467 START TEST nvmf_identify 00:19:29.467 ************************************ 00:19:29.467 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:19:29.467 * Looking for test storage... 00:19:29.467 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:29.467 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:29.467 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:19:29.467 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.727 
12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:29.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.727 --rc genhtml_branch_coverage=1 00:19:29.727 --rc genhtml_function_coverage=1 00:19:29.727 --rc genhtml_legend=1 00:19:29.727 --rc geninfo_all_blocks=1 00:19:29.727 --rc geninfo_unexecuted_blocks=1 00:19:29.727 00:19:29.727 ' 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:29.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.727 --rc genhtml_branch_coverage=1 00:19:29.727 --rc genhtml_function_coverage=1 00:19:29.727 --rc genhtml_legend=1 00:19:29.727 --rc geninfo_all_blocks=1 00:19:29.727 --rc geninfo_unexecuted_blocks=1 00:19:29.727 00:19:29.727 ' 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:29.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.727 --rc genhtml_branch_coverage=1 00:19:29.727 --rc genhtml_function_coverage=1 00:19:29.727 --rc genhtml_legend=1 00:19:29.727 --rc geninfo_all_blocks=1 00:19:29.727 --rc geninfo_unexecuted_blocks=1 00:19:29.727 00:19:29.727 ' 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:29.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.727 --rc genhtml_branch_coverage=1 00:19:29.727 --rc genhtml_function_coverage=1 00:19:29.727 --rc genhtml_legend=1 00:19:29.727 --rc geninfo_all_blocks=1 00:19:29.727 --rc geninfo_unexecuted_blocks=1 00:19:29.727 00:19:29.727 ' 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify 
-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.727 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:29.728 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:29.728 12:57:55 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:19:29.728 12:57:55 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:37.858 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:37.858 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:19:37.858 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:37.858 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:37.858 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:37.858 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.859 12:58:04 
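
For orientation: the e810/x722/mlx arrays being filled here bucket candidate NICs by PCI vendor:device ID (Intel E810/X722 versus Mellanox ConnectX), and this run will match 0x15b3:0x1015 devices a few steps below. A rough standalone equivalent of that lookup with lspci, illustrative only and not harness code:

    # List Mellanox NICs by the same vendor:device pairs the arrays key on;
    # 15b3:1015 is the pair reported for this test bed further below.
    for dev in 1015 1017 1019 101b 101d 1021 a2d6 a2dc; do
        lspci -nn -d "15b3:${dev}"
    done
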
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:37.859 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:37.859 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:37.859 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:37.859 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:37.859 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:37.859 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:37.859 altname enp217s0f0np0 00:19:37.859 altname ens818f0np0 00:19:37.859 inet 192.168.100.8/24 scope global mlx_0_0 00:19:37.859 valid_lft forever preferred_lft forever 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:37.859 12:58:04 
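
After rdma_device_init loads the IB/RDMA module stack traced above (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm), each RDMA interface has its IPv4 address read back. The lookup just traced for mlx_0_0 (and repeated for mlx_0_1 below) is a three-stage pipeline; restated as a self-contained function:

    # Print the first IPv4 address of an interface, prefix length stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this test bed
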
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:37.859 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:37.860 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:37.860 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:37.860 altname enp217s0f1np1 00:19:37.860 altname ens818f1np1 00:19:37.860 inet 192.168.100.9/24 scope global mlx_0_1 00:19:37.860 valid_lft forever preferred_lft forever 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:37.860 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:19:38.119 12:58:04 
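
Immediately below, the two interface addresses are folded into RDMA_IP_LIST and split into the first and second target IPs with head/tail; a condensed equivalent of that derivation, using the addresses this run discovered:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
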
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:38.119 192.168.100.9' 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:38.119 192.168.100.9' 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:38.119 192.168.100.9' 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=25587 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 25587 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 25587 ']' 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.119 12:58:04 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:38.119 [2024-11-27 12:58:04.382882] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:19:38.119 [2024-11-27 12:58:04.382944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.119 [2024-11-27 12:58:04.473228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:38.379 [2024-11-27 12:58:04.513998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.379 [2024-11-27 12:58:04.514037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.379 [2024-11-27 12:58:04.514046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.379 [2024-11-27 12:58:04.514054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.379 [2024-11-27 12:58:04.514077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.379 [2024-11-27 12:58:04.515818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.379 [2024-11-27 12:58:04.515942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.379 [2024-11-27 12:58:04.516025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:38.379 [2024-11-27 12:58:04.516027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.945 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.945 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:19:38.945 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:38.945 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.945 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:38.945 [2024-11-27 12:58:05.258017] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14c4df0/0x14c92e0) succeed. 00:19:38.945 [2024-11-27 12:58:05.267312] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14c6480/0x150a980) succeed. 
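
With nvmf_tgt up (pid 25587) and both mlx5 IB devices created, the test provisions the target over /var/tmp/spdk.sock. The rpc_cmd calls traced around this point condense to the following sequence, shown here with scripts/rpc.py, which rpc_cmd effectively forwards to in this harness:

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
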
00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:39.204 Malloc0 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:39.204 [2024-11-27 12:58:05.493758] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.204 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:39.204 [ 00:19:39.204 { 00:19:39.204 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:39.204 "subtype": "Discovery", 00:19:39.204 "listen_addresses": [ 00:19:39.204 { 00:19:39.204 "trtype": "RDMA", 
00:19:39.204 "adrfam": "IPv4", 00:19:39.204 "traddr": "192.168.100.8", 00:19:39.204 "trsvcid": "4420" 00:19:39.204 } 00:19:39.204 ], 00:19:39.204 "allow_any_host": true, 00:19:39.204 "hosts": [] 00:19:39.204 }, 00:19:39.204 { 00:19:39.204 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.204 "subtype": "NVMe", 00:19:39.204 "listen_addresses": [ 00:19:39.204 { 00:19:39.204 "trtype": "RDMA", 00:19:39.204 "adrfam": "IPv4", 00:19:39.204 "traddr": "192.168.100.8", 00:19:39.204 "trsvcid": "4420" 00:19:39.204 } 00:19:39.204 ], 00:19:39.204 "allow_any_host": true, 00:19:39.204 "hosts": [], 00:19:39.204 "serial_number": "SPDK00000000000001", 00:19:39.204 "model_number": "SPDK bdev Controller", 00:19:39.204 "max_namespaces": 32, 00:19:39.204 "min_cntlid": 1, 00:19:39.204 "max_cntlid": 65519, 00:19:39.204 "namespaces": [ 00:19:39.204 { 00:19:39.204 "nsid": 1, 00:19:39.204 "bdev_name": "Malloc0", 00:19:39.204 "name": "Malloc0", 00:19:39.204 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:39.204 "eui64": "ABCDEF0123456789", 00:19:39.204 "uuid": "e4a521a9-18fb-407e-91ea-301525459894" 00:19:39.204 } 00:19:39.204 ] 00:19:39.205 } 00:19:39.205 ] 00:19:39.205 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.205 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:39.205 [2024-11-27 12:58:05.552404] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:19:39.205 [2024-11-27 12:58:05.552448] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid25880 ] 00:19:39.471 [2024-11-27 12:58:05.614846] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:19:39.471 [2024-11-27 12:58:05.614923] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:19:39.471 [2024-11-27 12:58:05.614937] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:19:39.471 [2024-11-27 12:58:05.614942] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:19:39.471 [2024-11-27 12:58:05.614975] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:19:39.471 [2024-11-27 12:58:05.626170] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:19:39.471 [2024-11-27 12:58:05.636307] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:39.471 [2024-11-27 12:58:05.636318] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:19:39.471 [2024-11-27 12:58:05.636325] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636332] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636338] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636345] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636351] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636359] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636366] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636372] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636378] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636384] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636390] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636396] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636402] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636408] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636414] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636421] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636427] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636433] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636439] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636445] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636451] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636457] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636463] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 
12:58:05.636469] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636476] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636482] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636488] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636494] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636500] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636506] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636512] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636518] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:19:39.471 [2024-11-27 12:58:05.636523] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:39.471 [2024-11-27 12:58:05.636528] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:19:39.471 [2024-11-27 12:58:05.636550] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.636563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180300 00:19:39.471 [2024-11-27 12:58:05.641615] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.471 [2024-11-27 12:58:05.641627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:39.471 [2024-11-27 12:58:05.641635] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180300 00:19:39.471 [2024-11-27 12:58:05.641643] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:39.471 [2024-11-27 12:58:05.641650] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:19:39.471 [2024-11-27 12:58:05.641657] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:19:39.471 [2024-11-27 12:58:05.641672] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.641681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.472 [2024-11-27 12:58:05.641707] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.472 [2024-11-27 12:58:05.641714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:19:39.472 [2024-11-27 12:58:05.641721] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:19:39.472 [2024-11-27 12:58:05.641727] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.641734] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:19:39.472 [2024-11-27 12:58:05.641742] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.641750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.472 [2024-11-27 12:58:05.641769] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.472 [2024-11-27 12:58:05.641775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:19:39.472 [2024-11-27 12:58:05.641781] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:19:39.472 [2024-11-27 12:58:05.641787] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.641795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:39.472 [2024-11-27 12:58:05.641802] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.641810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.472 [2024-11-27 12:58:05.641829] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.472 [2024-11-27 12:58:05.641834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:39.472 [2024-11-27 12:58:05.641841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:39.472 [2024-11-27 12:58:05.641847] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.641855] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.641863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.472 [2024-11-27 12:58:05.641881] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.472 [2024-11-27 12:58:05.641888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:39.472 [2024-11-27 12:58:05.641895] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:39.472 [2024-11-27 12:58:05.641901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:39.472 [2024-11-27 12:58:05.641907] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180300 00:19:39.472 [2024-11-27 
12:58:05.641914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:39.472 [2024-11-27 12:58:05.642023] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:19:39.472 [2024-11-27 12:58:05.642029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:39.472 [2024-11-27 12:58:05.642039] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.472 [2024-11-27 12:58:05.642063] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.472 [2024-11-27 12:58:05.642069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:39.472 [2024-11-27 12:58:05.642075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:39.472 [2024-11-27 12:58:05.642081] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642090] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.472 [2024-11-27 12:58:05.642116] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.472 [2024-11-27 12:58:05.642122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:39.472 [2024-11-27 12:58:05.642128] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:39.472 [2024-11-27 12:58:05.642134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:39.472 [2024-11-27 12:58:05.642141] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:19:39.472 [2024-11-27 12:58:05.642158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:39.472 [2024-11-27 12:58:05.642169] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180300 00:19:39.472 [2024-11-27 12:58:05.642209] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:19:39.472 [2024-11-27 12:58:05.642215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:39.472 [2024-11-27 12:58:05.642226] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:19:39.472 [2024-11-27 12:58:05.642233] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:19:39.472 [2024-11-27 12:58:05.642239] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:19:39.472 [2024-11-27 12:58:05.642249] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:19:39.472 [2024-11-27 12:58:05.642256] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:19:39.472 [2024-11-27 12:58:05.642261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:19:39.472 [2024-11-27 12:58:05.642267] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:39.472 [2024-11-27 12:58:05.642284] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.472 [2024-11-27 12:58:05.642310] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.472 [2024-11-27 12:58:05.642316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:39.472 [2024-11-27 12:58:05.642325] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.472 [2024-11-27 12:58:05.642341] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.472 [2024-11-27 12:58:05.642355] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.472 [2024-11-27 12:58:05.642371] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.472 [2024-11-27 12:58:05.642384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:39.472 [2024-11-27 12:58:05.642390] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:39.472 [2024-11-27 12:58:05.642406] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.472 [2024-11-27 12:58:05.642432] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.472 [2024-11-27 12:58:05.642438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:19:39.472 [2024-11-27 12:58:05.642446] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:19:39.472 [2024-11-27 12:58:05.642452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:19:39.472 [2024-11-27 12:58:05.642458] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642467] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180300 00:19:39.472 [2024-11-27 12:58:05.642499] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.472 [2024-11-27 12:58:05.642504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:39.472 [2024-11-27 12:58:05.642512] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180300 00:19:39.472 [2024-11-27 12:58:05.642522] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:19:39.473 [2024-11-27 12:58:05.642543] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.473 [2024-11-27 12:58:05.642551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x180300 00:19:39.473 [2024-11-27 12:58:05.642559] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180300 00:19:39.473 [2024-11-27 12:58:05.642566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.473 [2024-11-27 12:58:05.642577] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.473 [2024-11-27 12:58:05.642583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
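
At this point the discovery controller is fully initialized (CC.EN = 1, CSTS.RDY = 1, keep-alive armed at 5 s) and the tool reads back the discovery log pages printed below. Once an NVM subsystem entry is known, attaching from the host reduces to the connect command this harness assembles up top (NVME_CONNECT='nvme connect -i 15'); a hedged sketch for this bed:

    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
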
00:19:39.473 [2024-11-27 12:58:05.642594] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180300 00:19:39.473 [2024-11-27 12:58:05.642602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x180300 00:19:39.473 [2024-11-27 12:58:05.642612] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180300 00:19:39.473 [2024-11-27 12:58:05.642619] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.473 [2024-11-27 12:58:05.642624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:39.473 [2024-11-27 12:58:05.642630] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180300 00:19:39.473 [2024-11-27 12:58:05.642637] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.473 [2024-11-27 12:58:05.642642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:39.473 [2024-11-27 12:58:05.642652] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180300 00:19:39.473 [2024-11-27 12:58:05.642659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x180300 00:19:39.473 [2024-11-27 12:58:05.642666] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180300 00:19:39.473 [2024-11-27 12:58:05.642687] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.473 [2024-11-27 12:58:05.642693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:39.473 [2024-11-27 12:58:05.642705] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180300 00:19:39.473 ===================================================== 00:19:39.473 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:39.473 ===================================================== 00:19:39.473 Controller Capabilities/Features 00:19:39.473 ================================ 00:19:39.473 Vendor ID: 0000 00:19:39.473 Subsystem Vendor ID: 0000 00:19:39.473 Serial Number: .................... 00:19:39.473 Model Number: ........................................ 
00:19:39.473 Firmware Version: 25.01
00:19:39.473 Recommended Arb Burst: 0
00:19:39.473 IEEE OUI Identifier: 00 00 00
00:19:39.473 Multi-path I/O
00:19:39.473 May have multiple subsystem ports: No
00:19:39.473 May have multiple controllers: No
00:19:39.473 Associated with SR-IOV VF: No
00:19:39.473 Max Data Transfer Size: 131072
00:19:39.473 Max Number of Namespaces: 0
00:19:39.473 Max Number of I/O Queues: 1024
00:19:39.473 NVMe Specification Version (VS): 1.3
00:19:39.473 NVMe Specification Version (Identify): 1.3
00:19:39.473 Maximum Queue Entries: 128
00:19:39.473 Contiguous Queues Required: Yes
00:19:39.473 Arbitration Mechanisms Supported
00:19:39.473 Weighted Round Robin: Not Supported
00:19:39.473 Vendor Specific: Not Supported
00:19:39.473 Reset Timeout: 15000 ms
00:19:39.473 Doorbell Stride: 4 bytes
00:19:39.473 NVM Subsystem Reset: Not Supported
00:19:39.473 Command Sets Supported
00:19:39.473 NVM Command Set: Supported
00:19:39.473 Boot Partition: Not Supported
00:19:39.473 Memory Page Size Minimum: 4096 bytes
00:19:39.473 Memory Page Size Maximum: 4096 bytes
00:19:39.473 Persistent Memory Region: Not Supported
00:19:39.473 Optional Asynchronous Events Supported
00:19:39.473 Namespace Attribute Notices: Not Supported
00:19:39.473 Firmware Activation Notices: Not Supported
00:19:39.473 ANA Change Notices: Not Supported
00:19:39.473 PLE Aggregate Log Change Notices: Not Supported
00:19:39.473 LBA Status Info Alert Notices: Not Supported
00:19:39.473 EGE Aggregate Log Change Notices: Not Supported
00:19:39.473 Normal NVM Subsystem Shutdown event: Not Supported
00:19:39.473 Zone Descriptor Change Notices: Not Supported
00:19:39.473 Discovery Log Change Notices: Supported
00:19:39.473 Controller Attributes
00:19:39.473 128-bit Host Identifier: Not Supported
00:19:39.473 Non-Operational Permissive Mode: Not Supported
00:19:39.473 NVM Sets: Not Supported
00:19:39.473 Read Recovery Levels: Not Supported
00:19:39.473 Endurance Groups: Not Supported
00:19:39.473 Predictable Latency Mode: Not Supported
00:19:39.473 Traffic Based Keep Alive: Not Supported
00:19:39.473 Namespace Granularity: Not Supported
00:19:39.473 SQ Associations: Not Supported
00:19:39.473 UUID List: Not Supported
00:19:39.473 Multi-Domain Subsystem: Not Supported
00:19:39.473 Fixed Capacity Management: Not Supported
00:19:39.473 Variable Capacity Management: Not Supported
00:19:39.473 Delete Endurance Group: Not Supported
00:19:39.473 Delete NVM Set: Not Supported
00:19:39.473 Extended LBA Formats Supported: Not Supported
00:19:39.473 Flexible Data Placement Supported: Not Supported
00:19:39.473
00:19:39.473 Controller Memory Buffer Support
00:19:39.473 ================================
00:19:39.473 Supported: No
00:19:39.473
00:19:39.473 Persistent Memory Region Support
00:19:39.473 ================================
00:19:39.473 Supported: No
00:19:39.473
00:19:39.473 Admin Command Set Attributes
00:19:39.473 ============================
00:19:39.473 Security Send/Receive: Not Supported
00:19:39.473 Format NVM: Not Supported
00:19:39.473 Firmware Activate/Download: Not Supported
00:19:39.473 Namespace Management: Not Supported
00:19:39.473 Device Self-Test: Not Supported
00:19:39.473 Directives: Not Supported
00:19:39.473 NVMe-MI: Not Supported
00:19:39.473 Virtualization Management: Not Supported
00:19:39.473 Doorbell Buffer Config: Not Supported
00:19:39.473 Get LBA Status Capability: Not Supported
00:19:39.473 Command & Feature Lockdown Capability: Not Supported
00:19:39.473 Abort Command Limit: 1
00:19:39.473 Async Event Request Limit: 4
00:19:39.473 Number of Firmware Slots: N/A
00:19:39.473 Firmware Slot 1 Read-Only: N/A
00:19:39.473 Firmware Activation Without Reset: N/A
00:19:39.473 Multiple Update Detection Support: N/A
00:19:39.473 Firmware Update Granularity: No Information Provided
00:19:39.473 Per-Namespace SMART Log: No
00:19:39.473 Asymmetric Namespace Access Log Page: Not Supported
00:19:39.473 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:19:39.473 Command Effects Log Page: Not Supported
00:19:39.473 Get Log Page Extended Data: Supported
00:19:39.473 Telemetry Log Pages: Not Supported
00:19:39.473 Persistent Event Log Pages: Not Supported
00:19:39.473 Supported Log Pages Log Page: May Support
00:19:39.473 Commands Supported & Effects Log Page: Not Supported
00:19:39.473 Feature Identifiers & Effects Log Page: May Support
00:19:39.473 NVMe-MI Commands & Effects Log Page: May Support
00:19:39.473 Data Area 4 for Telemetry Log: Not Supported
00:19:39.473 Error Log Page Entries Supported: 128
00:19:39.473 Keep Alive: Not Supported
00:19:39.473
00:19:39.473 NVM Command Set Attributes
00:19:39.473 ==========================
00:19:39.473 Submission Queue Entry Size
00:19:39.473 Max: 1
00:19:39.473 Min: 1
00:19:39.473 Completion Queue Entry Size
00:19:39.473 Max: 1
00:19:39.473 Min: 1
00:19:39.473 Number of Namespaces: 0
00:19:39.473 Compare Command: Not Supported
00:19:39.473 Write Uncorrectable Command: Not Supported
00:19:39.473 Dataset Management Command: Not Supported
00:19:39.473 Write Zeroes Command: Not Supported
00:19:39.473 Set Features Save Field: Not Supported
00:19:39.473 Reservations: Not Supported
00:19:39.473 Timestamp: Not Supported
00:19:39.473 Copy: Not Supported
00:19:39.473 Volatile Write Cache: Not Present
00:19:39.473 Atomic Write Unit (Normal): 1
00:19:39.473 Atomic Write Unit (PFail): 1
00:19:39.473 Atomic Compare & Write Unit: 1
00:19:39.473 Fused Compare & Write: Supported
00:19:39.473 Scatter-Gather List
00:19:39.473 SGL Command Set: Supported
00:19:39.473 SGL Keyed: Supported
00:19:39.473 SGL Bit Bucket Descriptor: Not Supported
00:19:39.473 SGL Metadata Pointer: Not Supported
00:19:39.473 Oversized SGL: Not Supported
00:19:39.473 SGL Metadata Address: Not Supported
00:19:39.473 SGL Offset: Supported
00:19:39.473 Transport SGL Data Block: Not Supported
00:19:39.473 Replay Protected Memory Block: Not Supported
00:19:39.473
00:19:39.473 Firmware Slot Information
00:19:39.473 =========================
00:19:39.473 Active slot: 0
00:19:39.473
00:19:39.473
00:19:39.473 Error Log
00:19:39.473 =========
00:19:39.473
00:19:39.473 Active Namespaces
00:19:39.473 =================
00:19:39.473 Discovery Log Page
00:19:39.473 ==================
00:19:39.473 Generation Counter: 2
00:19:39.473 Number of Records: 2
00:19:39.473 Record Format: 0
00:19:39.473
00:19:39.473 Discovery Log Entry 0
00:19:39.474 ----------------------
00:19:39.474 Transport Type: 1 (RDMA)
00:19:39.474 Address Family: 1 (IPv4)
00:19:39.474 Subsystem Type: 3 (Current Discovery Subsystem)
00:19:39.474 Entry Flags:
00:19:39.474 Duplicate Returned Information: 1
00:19:39.474 Explicit Persistent Connection Support for Discovery: 1
00:19:39.474 Transport Requirements:
00:19:39.474 Secure Channel: Not Required
00:19:39.474 Port ID: 0 (0x0000)
00:19:39.474 Controller ID: 65535 (0xffff)
00:19:39.474 Admin Max SQ Size: 128
00:19:39.474 Transport Service Identifier: 4420
00:19:39.474 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:19:39.474 Transport Address: 192.168.100.8
00:19:39.474 Transport Specific Address Subtype - RDMA
00:19:39.474 RDMA QP Service Type: 1 (Reliable Connected)
00:19:39.474 RDMA Provider Type: 1 (No provider specified)
00:19:39.474 RDMA CM Service: 1 (RDMA_CM)
00:19:39.474 Discovery Log Entry 1
00:19:39.474 ----------------------
00:19:39.474 Transport Type: 1 (RDMA)
00:19:39.474 Address Family: 1 (IPv4)
00:19:39.474 Subsystem Type: 2 (NVM Subsystem)
00:19:39.474 Entry Flags:
00:19:39.474 Duplicate Returned Information: 0
00:19:39.474 Explicit Persistent Connection Support for Discovery: 0
00:19:39.474 Transport Requirements:
00:19:39.474 Secure Channel: Not Required
00:19:39.474 Port ID: 0 (0x0000)
00:19:39.474 Controller ID: 65535 (0xffff)
00:19:39.474 Admin Max SQ Size: [2024-11-27 12:58:05.642773] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:19:39.474 [2024-11-27 12:58:05.642783] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 55958 doesn't match qid
00:19:39.474 [2024-11-27 12:58:05.642797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:d202e370 sqhd:1a40 p:0 m:0 dnr:0
00:19:39.474 [2024-11-27 12:58:05.642803] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 55958 doesn't match qid
00:19:39.474 [2024-11-27 12:58:05.642811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:d202e370 sqhd:1a40 p:0 m:0 dnr:0
00:19:39.474 [2024-11-27 12:58:05.642817] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 55958 doesn't match qid
00:19:39.474 [2024-11-27 12:58:05.642825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:d202e370 sqhd:1a40 p:0 m:0 dnr:0
00:19:39.474 [2024-11-27 12:58:05.642831] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 55958 doesn't match qid
00:19:39.474 [2024-11-27 12:58:05.642839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:d202e370 sqhd:1a40 p:0 m:0 dnr:0
00:19:39.474 [2024-11-27 12:58:05.642850] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180300
00:19:39.474 [2024-11-27 12:58:05.642859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.474 [2024-11-27 12:58:05.642879] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.474 [2024-11-27 12:58:05.642885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:19:39.474 [2024-11-27 12:58:05.642894] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300
00:19:39.474 [2024-11-27 12:58:05.642901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.474 [2024-11-27 12:58:05.642907] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180300
00:19:39.474 [2024-11-27 12:58:05.642927] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.474 [2024-11-27 12:58:05.642933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:19:39.474 [2024-11-27 12:58:05.642939]
nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:19:39.474 [2024-11-27 12:58:05.642945] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:19:39.474 [2024-11-27 12:58:05.642951] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.642960] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.642967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.474 [2024-11-27 12:58:05.642989] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.474 [2024-11-27 12:58:05.642995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:39.474 [2024-11-27 12:58:05.643001] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643010] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.474 [2024-11-27 12:58:05.643034] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.474 [2024-11-27 12:58:05.643040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:39.474 [2024-11-27 12:58:05.643046] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643057] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.474 [2024-11-27 12:58:05.643085] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.474 [2024-11-27 12:58:05.643090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:39.474 [2024-11-27 12:58:05.643097] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643106] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.474 [2024-11-27 12:58:05.643137] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.474 [2024-11-27 12:58:05.643142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:39.474 [2024-11-27 12:58:05.643149] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643158] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.474 [2024-11-27 12:58:05.643185] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.474 [2024-11-27 12:58:05.643191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:39.474 [2024-11-27 12:58:05.643198] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643206] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.474 [2024-11-27 12:58:05.643232] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.474 [2024-11-27 12:58:05.643238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:39.474 [2024-11-27 12:58:05.643244] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643253] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.474 [2024-11-27 12:58:05.643278] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.474 [2024-11-27 12:58:05.643284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:39.474 [2024-11-27 12:58:05.643291] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643300] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.474 [2024-11-27 12:58:05.643333] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.474 [2024-11-27 12:58:05.643339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:39.474 [2024-11-27 12:58:05.643346] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643355] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.474 [2024-11-27 12:58:05.643382] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.474 [2024-11-27 12:58:05.643387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:39.474 [2024-11-27 12:58:05.643394] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643402] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.474 [2024-11-27 12:58:05.643431] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.474 [2024-11-27 12:58:05.643437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:39.474 [2024-11-27 12:58:05.643443] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643452] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.474 [2024-11-27 12:58:05.643459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.643482] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.643488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.643495] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643505] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.643535] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.643540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.643547] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643555] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.643583] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.643588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.643595] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643604] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.643640] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.643646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.643653] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643663] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.643691] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.643696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.643703] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643711] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.643738] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.643744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.643751] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643760] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.643789] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.643794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.643801] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643809] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.643832] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.643838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.643844] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643853] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.643879] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.643885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.643892] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643902] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.643926] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.643931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.643938] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643946] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.643971] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.643977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.643984] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.643993] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.644001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.644023] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.644029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.644035] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.644044] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.644052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.644073] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.644079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.644086] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.644095] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.644103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.644119] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.644125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.644132] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.644140] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.644148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.644164] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.644170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.644177] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.644187] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.644195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.644215] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.644221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:19:39.475 [2024-11-27 12:58:05.644228] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.644236] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.475 [2024-11-27 12:58:05.644244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.475 [2024-11-27 12:58:05.644259] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.475 [2024-11-27 12:58:05.644265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644271] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644280] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644311] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644322] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644331] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644360] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644371] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644380] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644407] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644419] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644428] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644456] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644469] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644478] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644503] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644515] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644524] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644547] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644559] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644567] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644593] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644604] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644617] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644646] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644658] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644667] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644694] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644705] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644714] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644741] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644754] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644763] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644790] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644802] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644811] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644834] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644845] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644854] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644876] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644887] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644896] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644919] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644931] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644940] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.644965] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.644970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.644976] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644985] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.644993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.645016] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.645023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:39.476 [2024-11-27 12:58:05.645029] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.645038] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.476 [2024-11-27 12:58:05.645045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.476 [2024-11-27 12:58:05.645063] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.476 [2024-11-27 12:58:05.645068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:39.477 [2024-11-27 12:58:05.645074] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645083] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.477 [2024-11-27 12:58:05.645108] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.477 [2024-11-27 12:58:05.645114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:39.477 [2024-11-27 12:58:05.645120] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645129] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.477 [2024-11-27 12:58:05.645158] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.477 [2024-11-27 12:58:05.645163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:39.477 [2024-11-27 12:58:05.645169] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645178] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.477 [2024-11-27 12:58:05.645200] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.477 [2024-11-27 12:58:05.645205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:39.477 [2024-11-27 12:58:05.645211] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645220] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.477 [2024-11-27 12:58:05.645245] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.477 [2024-11-27 12:58:05.645250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:39.477 [2024-11-27 12:58:05.645257] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645265] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.477 [2024-11-27 12:58:05.645296] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.477 [2024-11-27 12:58:05.645301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:39.477 [2024-11-27 12:58:05.645307] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645316] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.477 [2024-11-27 12:58:05.645343] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.477 [2024-11-27 12:58:05.645349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:39.477 [2024-11-27 12:58:05.645355] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645364] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.477 [2024-11-27 12:58:05.645390] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.477 [2024-11-27 12:58:05.645396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:39.477 [2024-11-27 12:58:05.645402] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645411] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.477 [2024-11-27 12:58:05.645438] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.477 [2024-11-27 12:58:05.645443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:39.477 [2024-11-27 12:58:05.645450] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645458] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.477 [2024-11-27 12:58:05.645485] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.477 [2024-11-27 12:58:05.645491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:39.477 [2024-11-27 12:58:05.645497] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645506] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.477 [2024-11-27 12:58:05.645531] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.477 [2024-11-27 12:58:05.645536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:39.477 [2024-11-27 12:58:05.645543] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645551] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.477 [2024-11-27 12:58:05.645576] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.477 [2024-11-27 12:58:05.645581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:39.477 [2024-11-27 12:58:05.645588] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645596] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.645604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0
00:19:39.477 [2024-11-27 12:58:05.649619] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.477 [2024-11-27 12:58:05.649625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0
00:19:39.477 [2024-11-27 12:58:05.649631] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180300
00:19:39.477 [2024-11-27 12:58:05.649640] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300
00:19:39.477 [2024-11-27 12:58:05.649648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.477 [2024-11-27 12:58:05.649667] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.477 [2024-11-27 12:58:05.649673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0
00:19:39.477 [2024-11-27 12:58:05.649679] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180300
00:19:39.477 [2024-11-27 12:58:05.649686] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds
00:19:39.477 128
00:19:39.477 Transport Service Identifier: 4420
00:19:39.477 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:19:39.477 Transport Address: 192.168.100.8
00:19:39.477 Transport Specific Address Subtype - RDMA
00:19:39.477 RDMA QP Service Type: 1 (Reliable Connected)
00:19:39.477 RDMA Provider Type: 1 (No provider specified)
00:19:39.477 RDMA CM Service: 1 (RDMA_CM)
00:19:39.477 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:19:39.477 [2024-11-27 12:58:05.721907] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:19:39.477 [2024-11-27 12:58:05.721948] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid25885 ]
00:19:39.477 [2024-11-27 12:58:05.783923] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:19:39.477 [2024-11-27 12:58:05.783998] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:19:39.477 [2024-11-27 12:58:05.784012] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:19:39.477 [2024-11-27 12:58:05.784017] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:19:39.477 [2024-11-27 12:58:05.784043] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:19:39.477 [2024-11-27 12:58:05.791995] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
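The host/identify.sh step above drives the prebuilt spdk_nvme_identify example binary straight at the target, once for the discovery subsystem and once for nqn.2016-06.io.spdk:cnode1. A minimal sketch of reproducing both probes by hand, assuming the same workspace build tree and the fabric address reported in the discovery log; the discovery invocation without subnqn: is an assumption inferred from the first report, the cnode1 invocation is copied from this log:

  # Identify the discovery controller (assumed invocation; with subnqn omitted the
  # discovery subsystem nqn.2014-08.org.nvmexpress.discovery should be targeted):
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -L all
  # Identify the NVM subsystem exactly as the test does (-L all enables the
  # per-component debug tracing that fills the rest of this log):
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all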
00:19:39.477 [2024-11-27 12:58:05.806729] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:39.477 [2024-11-27 12:58:05.806746] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:19:39.477 [2024-11-27 12:58:05.806754] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.806761] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.806767] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.806774] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.806780] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.806786] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180300 00:19:39.477 [2024-11-27 12:58:05.806792] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806798] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806804] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806810] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806816] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806823] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806829] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806835] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806841] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806847] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806853] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806859] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806865] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806872] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806878] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806884] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806890] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 
12:58:05.806896] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806902] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806908] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806914] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806920] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806927] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806933] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806943] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806949] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:19:39.478 [2024-11-27 12:58:05.806954] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:39.478 [2024-11-27 12:58:05.806959] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:19:39.478 [2024-11-27 12:58:05.806975] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.806987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180300 00:19:39.478 [2024-11-27 12:58:05.812612] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.478 [2024-11-27 12:58:05.812622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:39.478 [2024-11-27 12:58:05.812629] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.812639] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:39.478 [2024-11-27 12:58:05.812646] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:19:39.478 [2024-11-27 12:58:05.812653] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:19:39.478 [2024-11-27 12:58:05.812665] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.812674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.478 [2024-11-27 12:58:05.812691] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.478 [2024-11-27 12:58:05.812697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:19:39.478 [2024-11-27 12:58:05.812703] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:19:39.478 [2024-11-27 12:58:05.812710] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.812716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:19:39.478 [2024-11-27 12:58:05.812724] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.812732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.478 [2024-11-27 12:58:05.812746] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.478 [2024-11-27 12:58:05.812752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:19:39.478 [2024-11-27 12:58:05.812759] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:19:39.478 [2024-11-27 12:58:05.812765] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.812772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:39.478 [2024-11-27 12:58:05.812780] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.812788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.478 [2024-11-27 12:58:05.812804] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.478 [2024-11-27 12:58:05.812812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:39.478 [2024-11-27 12:58:05.812818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:39.478 [2024-11-27 12:58:05.812824] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.812833] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.812841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.478 [2024-11-27 12:58:05.812863] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.478 [2024-11-27 12:58:05.812868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:39.478 [2024-11-27 12:58:05.812875] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:39.478 [2024-11-27 12:58:05.812881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:39.478 [2024-11-27 12:58:05.812887] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.812893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:39.478 [2024-11-27 12:58:05.813002] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:19:39.478 [2024-11-27 12:58:05.813008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:39.478 [2024-11-27 12:58:05.813017] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.813024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.478 [2024-11-27 12:58:05.813042] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.478 [2024-11-27 12:58:05.813048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:39.478 [2024-11-27 12:58:05.813054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:39.478 [2024-11-27 12:58:05.813061] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.813069] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.813077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.478 [2024-11-27 12:58:05.813095] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.478 [2024-11-27 12:58:05.813101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:39.478 [2024-11-27 12:58:05.813107] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:39.478 [2024-11-27 12:58:05.813113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:39.478 [2024-11-27 12:58:05.813119] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.813126] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:19:39.478 [2024-11-27 12:58:05.813134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:39.478 [2024-11-27 12:58:05.813145] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.478 [2024-11-27 12:58:05.813153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180300 00:19:39.478 [2024-11-27 12:58:05.813196] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.478 [2024-11-27 12:58:05.813202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 
dnr:0 00:19:39.478 [2024-11-27 12:58:05.813210] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:19:39.478 [2024-11-27 12:58:05.813217] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:19:39.478 [2024-11-27 12:58:05.813222] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:19:39.478 [2024-11-27 12:58:05.813229] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:19:39.478 [2024-11-27 12:58:05.813235] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:19:39.478 [2024-11-27 12:58:05.813241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813247] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813262] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.479 [2024-11-27 12:58:05.813292] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.479 [2024-11-27 12:58:05.813298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:39.479 [2024-11-27 12:58:05.813306] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.479 [2024-11-27 12:58:05.813320] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.479 [2024-11-27 12:58:05.813334] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.479 [2024-11-27 12:58:05.813348] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.479 [2024-11-27 12:58:05.813361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813367] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180300 
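
The FABRIC PROPERTY GET completions earlier in this connect sequence returned cdw0:10300 for the VS register and cdw0:1e01007f, which matches the low doubleword of CAP. A minimal decode sketch (bash; the two constants are copied from the completions above, everything else is illustrative):

# Decode the VS and CAP(31:0) register values read during controller init.
vs=0x10300 cap=0x1e01007f
printf 'VS   = %d.%d.%d (NVMe spec version)\n' $(( vs >> 16 )) $(( (vs >> 8) & 0xff )) $(( vs & 0xff ))
printf 'MQES = %d -> max queue entries = %d\n' $(( cap & 0xffff )) $(( (cap & 0xffff) + 1 ))
printf 'CQR  = %d -> contiguous queues required\n' $(( (cap >> 16) & 1 ))
printf 'TO   = %d -> ready timeout = %d ms\n' $(( (cap >> 24) & 0xff )) $(( ((cap >> 24) & 0xff) * 500 ))

TO = 0x1e = 30 units of 500 ms is where the 15000 ms timeouts on the "check en"/"disable"/"enable" states above come from, and the 1.3 spec version and 128 maximum queue entries reappear in the controller report further down.
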
00:19:39.479 [2024-11-27 12:58:05.813375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813384] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.479 [2024-11-27 12:58:05.813411] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.479 [2024-11-27 12:58:05.813417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:19:39.479 [2024-11-27 12:58:05.813423] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:19:39.479 [2024-11-27 12:58:05.813429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813435] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813457] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.479 [2024-11-27 12:58:05.813492] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.479 [2024-11-27 12:58:05.813498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:19:39.479 [2024-11-27 12:58:05.813550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813556] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813574] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x180300 00:19:39.479 [2024-11-27 12:58:05.813604] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.479 [2024-11-27 12:58:05.813614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:39.479 
[2024-11-27 12:58:05.813623] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:19:39.479 [2024-11-27 12:58:05.813636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813643] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813659] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180300 00:19:39.479 [2024-11-27 12:58:05.813705] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.479 [2024-11-27 12:58:05.813711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:39.479 [2024-11-27 12:58:05.813726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813732] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813749] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180300 00:19:39.479 [2024-11-27 12:58:05.813782] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.479 [2024-11-27 12:58:05.813788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:39.479 [2024-11-27 12:58:05.813797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813803] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell 
buffer config (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813845] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:19:39.479 [2024-11-27 12:58:05.813851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:19:39.479 [2024-11-27 12:58:05.813857] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:19:39.479 [2024-11-27 12:58:05.813871] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.479 [2024-11-27 12:58:05.813886] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.479 [2024-11-27 12:58:05.813904] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.479 [2024-11-27 12:58:05.813910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:39.479 [2024-11-27 12:58:05.813916] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813923] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.479 [2024-11-27 12:58:05.813930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:39.479 [2024-11-27 12:58:05.813936] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813945] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.479 [2024-11-27 12:58:05.813970] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.479 [2024-11-27 12:58:05.813976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:39.479 [2024-11-27 12:58:05.813982] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813991] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180300 00:19:39.479 [2024-11-27 12:58:05.813999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.479 [2024-11-27 12:58:05.814019] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.479 [2024-11-27 12:58:05.814025] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:19:39.479 [2024-11-27 12:58:05.814031] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180300
00:19:39.479 [2024-11-27 12:58:05.814040] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180300
00:19:39.479 [2024-11-27 12:58:05.814048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.480 [2024-11-27 12:58:05.814068] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.480 [2024-11-27 12:58:05.814074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0
00:19:39.480 [2024-11-27 12:58:05.814080] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180300
00:19:39.480 [2024-11-27 12:58:05.814094] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180300
00:19:39.480 [2024-11-27 12:58:05.814102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x180300
00:19:39.480 [2024-11-27 12:58:05.814110] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180300
00:19:39.480 [2024-11-27 12:58:05.814118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x180300
00:19:39.480 [2024-11-27 12:58:05.814126] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180300
00:19:39.480 [2024-11-27 12:58:05.814134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x180300
00:19:39.480 [2024-11-27 12:58:05.814142] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180300
00:19:39.480 [2024-11-27 12:58:05.814150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x180300
00:19:39.480 [2024-11-27 12:58:05.814159] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.480 [2024-11-27 12:58:05.814166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:19:39.480 [2024-11-27 12:58:05.814177] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180300
00:19:39.480 [2024-11-27 12:58:05.814184] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.480 [2024-11-27 12:58:05.814189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:19:39.480 [2024-11-27 12:58:05.814200] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180300
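
The four GET LOG PAGE submissions above pack the log identifier into cdw10 bits 7:0 and the transfer size, as a zero-based dword count, into bits 27:16 (cdw11 carries the upper count bits and is 0 here). A decode sketch with the cdw10 values copied from the commands above:

# Decode GET LOG PAGE cdw10 fields: LID (7:0) and NUMDL (27:16, 0-based dwords).
for cdw10 in 0x07ff0001 0x007f0002 0x007f0003 0x03ff0005; do
    printf 'cdw10=0x%08x LID=%02xh bytes=%d\n' \
        $(( cdw10 )) $(( cdw10 & 0xff )) $(( (((cdw10 >> 16) & 0xfff) + 1) * 4 ))
done

The computed sizes (8192, 512, 512 and 4096 bytes) match the len: fields on the four submissions, and the LIDs are the error information (01h), SMART / health (02h), firmware slot (03h) and commands supported and effects (05h) pages.
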
00:19:39.480 [2024-11-27 12:58:05.814206] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.480 [2024-11-27 12:58:05.814212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:19:39.480 [2024-11-27 12:58:05.814219] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180300
00:19:39.480 [2024-11-27 12:58:05.814225] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.480 [2024-11-27 12:58:05.814230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:19:39.480 [2024-11-27 12:58:05.814240] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180300
00:19:39.480 =====================================================
00:19:39.480 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:19:39.480 =====================================================
00:19:39.480 Controller Capabilities/Features
00:19:39.480 ================================
00:19:39.480 Vendor ID: 8086
00:19:39.480 Subsystem Vendor ID: 8086
00:19:39.480 Serial Number: SPDK00000000000001
00:19:39.480 Model Number: SPDK bdev Controller
00:19:39.480 Firmware Version: 25.01
00:19:39.480 Recommended Arb Burst: 6
00:19:39.480 IEEE OUI Identifier: e4 d2 5c
00:19:39.480 Multi-path I/O
00:19:39.480 May have multiple subsystem ports: Yes
00:19:39.480 May have multiple controllers: Yes
00:19:39.480 Associated with SR-IOV VF: No
00:19:39.480 Max Data Transfer Size: 131072
00:19:39.480 Max Number of Namespaces: 32
00:19:39.480 Max Number of I/O Queues: 127
00:19:39.480 NVMe Specification Version (VS): 1.3
00:19:39.480 NVMe Specification Version (Identify): 1.3
00:19:39.480 Maximum Queue Entries: 128
00:19:39.480 Contiguous Queues Required: Yes
00:19:39.480 Arbitration Mechanisms Supported
00:19:39.480 Weighted Round Robin: Not Supported
00:19:39.480 Vendor Specific: Not Supported
00:19:39.480 Reset Timeout: 15000 ms
00:19:39.480 Doorbell Stride: 4 bytes
00:19:39.480 NVM Subsystem Reset: Not Supported
00:19:39.480 Command Sets Supported
00:19:39.480 NVM Command Set: Supported
00:19:39.480 Boot Partition: Not Supported
00:19:39.480 Memory Page Size Minimum: 4096 bytes
00:19:39.480 Memory Page Size Maximum: 4096 bytes
00:19:39.480 Persistent Memory Region: Not Supported
00:19:39.480 Optional Asynchronous Events Supported
00:19:39.480 Namespace Attribute Notices: Supported
00:19:39.480 Firmware Activation Notices: Not Supported
00:19:39.480 ANA Change Notices: Not Supported
00:19:39.480 PLE Aggregate Log Change Notices: Not Supported
00:19:39.480 LBA Status Info Alert Notices: Not Supported
00:19:39.480 EGE Aggregate Log Change Notices: Not Supported
00:19:39.480 Normal NVM Subsystem Shutdown event: Not Supported
00:19:39.480 Zone Descriptor Change Notices: Not Supported
00:19:39.480 Discovery Log Change Notices: Not Supported
00:19:39.480 Controller Attributes
00:19:39.480 128-bit Host Identifier: Supported
00:19:39.480 Non-Operational Permissive Mode: Not Supported
00:19:39.480 NVM Sets: Not Supported
00:19:39.480 Read Recovery Levels: Not Supported
00:19:39.480 Endurance Groups: Not Supported
00:19:39.480 Predictable Latency Mode: Not Supported
00:19:39.480 Traffic Based Keep Alive: Not Supported
00:19:39.480 Namespace Granularity: Not Supported
00:19:39.480 SQ Associations: Not Supported
00:19:39.480 UUID List: Not Supported
00:19:39.480 Multi-Domain Subsystem: Not Supported
00:19:39.480 Fixed Capacity Management: Not Supported
00:19:39.480 Variable Capacity Management: Not Supported
00:19:39.480 Delete Endurance Group: Not Supported
00:19:39.480 Delete NVM Set: Not Supported
00:19:39.480 Extended LBA Formats Supported: Not Supported
00:19:39.480 Flexible Data Placement Supported: Not Supported
00:19:39.480
00:19:39.480 Controller Memory Buffer Support
00:19:39.480 ================================
00:19:39.480 Supported: No
00:19:39.480
00:19:39.480 Persistent Memory Region Support
00:19:39.480 ================================
00:19:39.480 Supported: No
00:19:39.480
00:19:39.480 Admin Command Set Attributes
00:19:39.480 ============================
00:19:39.480 Security Send/Receive: Not Supported
00:19:39.480 Format NVM: Not Supported
00:19:39.480 Firmware Activate/Download: Not Supported
00:19:39.480 Namespace Management: Not Supported
00:19:39.480 Device Self-Test: Not Supported
00:19:39.480 Directives: Not Supported
00:19:39.480 NVMe-MI: Not Supported
00:19:39.480 Virtualization Management: Not Supported
00:19:39.480 Doorbell Buffer Config: Not Supported
00:19:39.480 Get LBA Status Capability: Not Supported
00:19:39.480 Command & Feature Lockdown Capability: Not Supported
00:19:39.480 Abort Command Limit: 4
00:19:39.480 Async Event Request Limit: 4
00:19:39.480 Number of Firmware Slots: N/A
00:19:39.480 Firmware Slot 1 Read-Only: N/A
00:19:39.480 Firmware Activation Without Reset: N/A
00:19:39.480 Multiple Update Detection Support: N/A
00:19:39.480 Firmware Update Granularity: No Information Provided
00:19:39.480 Per-Namespace SMART Log: No
00:19:39.480 Asymmetric Namespace Access Log Page: Not Supported
00:19:39.480 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:19:39.480 Command Effects Log Page: Supported
00:19:39.480 Get Log Page Extended Data: Supported
00:19:39.480 Telemetry Log Pages: Not Supported
00:19:39.480 Persistent Event Log Pages: Not Supported
00:19:39.480 Supported Log Pages Log Page: May Support
00:19:39.480 Commands Supported & Effects Log Page: Not Supported
00:19:39.480 Feature Identifiers & Effects Log Page: May Support
00:19:39.480 NVMe-MI Commands & Effects Log Page: May Support
00:19:39.480 Data Area 4 for Telemetry Log: Not Supported
00:19:39.480 Error Log Page Entries Supported: 128
00:19:39.480 Keep Alive: Supported
00:19:39.480 Keep Alive Granularity: 10000 ms
00:19:39.480
00:19:39.480 NVM Command Set Attributes
00:19:39.480 ==========================
00:19:39.480 Submission Queue Entry Size
00:19:39.480 Max: 64
00:19:39.480 Min: 64
00:19:39.480 Completion Queue Entry Size
00:19:39.480 Max: 16
00:19:39.480 Min: 16
00:19:39.480 Number of Namespaces: 32
00:19:39.480 Compare Command: Supported
00:19:39.480 Write Uncorrectable Command: Not Supported
00:19:39.480 Dataset Management Command: Supported
00:19:39.480 Write Zeroes Command: Supported
00:19:39.480 Set Features Save Field: Not Supported
00:19:39.480 Reservations: Supported
00:19:39.481 Timestamp: Not Supported
00:19:39.481 Copy: Supported
00:19:39.481 Volatile Write Cache: Present
00:19:39.481 Atomic Write Unit (Normal): 1
00:19:39.481 Atomic Write Unit (PFail): 1
00:19:39.481 Atomic Compare & Write Unit: 1
00:19:39.481 Fused Compare & Write: Supported
00:19:39.481 Scatter-Gather List
00:19:39.481 SGL Command Set: Supported
00:19:39.481 SGL Keyed: Supported
00:19:39.481 SGL Bit Bucket Descriptor: Not Supported
00:19:39.481 SGL Metadata Pointer: Not Supported
00:19:39.481 Oversized SGL: Not Supported
00:19:39.481 SGL Metadata Address: Not Supported
00:19:39.481 SGL Offset: Supported
00:19:39.481 Transport SGL Data Block: Not Supported
00:19:39.481 Replay Protected Memory Block: Not Supported
00:19:39.481
00:19:39.481 Firmware Slot Information
00:19:39.481 =========================
00:19:39.481 Active slot: 1
00:19:39.481 Slot 1 Firmware Revision: 25.01
00:19:39.481
00:19:39.481
00:19:39.481 Commands Supported and Effects
00:19:39.481 ==============================
00:19:39.481 Admin Commands
00:19:39.481 --------------
00:19:39.481 Get Log Page (02h): Supported
00:19:39.481 Identify (06h): Supported
00:19:39.481 Abort (08h): Supported
00:19:39.481 Set Features (09h): Supported
00:19:39.481 Get Features (0Ah): Supported
00:19:39.481 Asynchronous Event Request (0Ch): Supported
00:19:39.481 Keep Alive (18h): Supported
00:19:39.481 I/O Commands
00:19:39.481 ------------
00:19:39.481 Flush (00h): Supported LBA-Change
00:19:39.481 Write (01h): Supported LBA-Change
00:19:39.481 Read (02h): Supported
00:19:39.481 Compare (05h): Supported
00:19:39.481 Write Zeroes (08h): Supported LBA-Change
00:19:39.481 Dataset Management (09h): Supported LBA-Change
00:19:39.481 Copy (19h): Supported LBA-Change
00:19:39.481
00:19:39.481 Error Log
00:19:39.481 =========
00:19:39.481
00:19:39.481 Arbitration
00:19:39.481 ===========
00:19:39.481 Arbitration Burst: 1
00:19:39.481
00:19:39.481 Power Management
00:19:39.481 ================
00:19:39.481 Number of Power States: 1
00:19:39.481 Current Power State: Power State #0
00:19:39.481 Power State #0:
00:19:39.481 Max Power: 0.00 W
00:19:39.481 Non-Operational State: Operational
00:19:39.481 Entry Latency: Not Reported
00:19:39.481 Exit Latency: Not Reported
00:19:39.481 Relative Read Throughput: 0
00:19:39.481 Relative Read Latency: 0
00:19:39.481 Relative Write Throughput: 0
00:19:39.481 Relative Write Latency: 0
00:19:39.481 Idle Power: Not Reported
00:19:39.481 Active Power: Not Reported
00:19:39.481 Non-Operational Permissive Mode: Not Supported
00:19:39.481
00:19:39.481 Health Information
00:19:39.481 ==================
00:19:39.481 Critical Warnings:
00:19:39.481 Available Spare Space: OK
00:19:39.481 Temperature: OK
00:19:39.481 Device Reliability: OK
00:19:39.481 Read Only: No
00:19:39.481 Volatile Memory Backup: OK
00:19:39.481 Current Temperature: 0 Kelvin (-273 Celsius)
00:19:39.481 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:19:39.481 Available Spare: 0%
00:19:39.481 Available Spare Threshold: 0%
00:19:39.481 Life Percentage [2024-11-27 12:58:05.814318] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.481 [2024-11-27 12:58:05.814350] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.481 [2024-11-27 12:58:05.814356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:19:39.481 [2024-11-27 12:58:05.814362] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814389] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:19:39.481 [2024-11-27 12:58:05.814399] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 18741 doesn't match qid
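
The report above is the standard controller dump printed by SPDK's identify example once the connect and init sequence finishes. An invocation of the following shape would reproduce it against this target (the binary path and option string are an assumption reconstructed from the address and subsystem NQN in the report, not copied from this log):

./build/examples/identify -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Two completions earlier in the sequence decode the same way as the CAP read: GET FEATURES KEEP ALIVE TIMER returned cdw0:2710 (0x2710 = 10000 ms, the report's keep alive granularity; the DEBUG line shows the host then sending a keep alive every 5000000 us, half the timeout), and SET FEATURES NUMBER OF QUEUES returned cdw0:7e007e (0x7e = 126, zero-based, giving the 127 I/O queues in the report).
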
00:19:39.481 [2024-11-27 12:58:05.814413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32580 cdw0:2e5fb290 sqhd:8a40 p:0 m:0 dnr:0
00:19:39.481 [2024-11-27 12:58:05.814419] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 18741 doesn't match qid
00:19:39.481 [2024-11-27 12:58:05.814427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32580 cdw0:2e5fb290 sqhd:8a40 p:0 m:0 dnr:0
00:19:39.481 [2024-11-27 12:58:05.814434] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 18741 doesn't match qid
00:19:39.481 [2024-11-27 12:58:05.814441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32580 cdw0:2e5fb290 sqhd:8a40 p:0 m:0 dnr:0
00:19:39.481 [2024-11-27 12:58:05.814448] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 18741 doesn't match qid
00:19:39.481 [2024-11-27 12:58:05.814455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32580 cdw0:2e5fb290 sqhd:8a40 p:0 m:0 dnr:0
00:19:39.481 [2024-11-27 12:58:05.814464] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.481 [2024-11-27 12:58:05.814488] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.481 [2024-11-27 12:58:05.814494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0
00:19:39.481 [2024-11-27 12:58:05.814502] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.481 [2024-11-27 12:58:05.814518] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814536] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.481 [2024-11-27 12:58:05.814542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:19:39.481 [2024-11-27 12:58:05.814548] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:19:39.481 [2024-11-27 12:58:05.814554] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:19:39.481 [2024-11-27 12:58:05.814560] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814569] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.481 [2024-11-27 12:58:05.814597] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.481 [2024-11-27 12:58:05.814603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
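
From "Prepare to destruct SSD" onward the host runs the spec's shutdown handshake: the cid:4 PROPERTY GET returning cdw0:460001 is consistent with reading CC, the PROPERTY SET then writes the shutdown notification bits, and the repeated cid:3 PROPERTY GETs below poll a register coming back as cdw0:1, presumably CSTS, until shutdown completes or the 10000 ms timeout fires. A decode sketch with those two values copied from the completions above:

# Decode the CC and CSTS values observed in the shutdown handshake.
cc=0x460001 csts=0x1
printf 'CC  : EN=%d SHN=%d IOSQES=%d IOCQES=%d\n' \
    $(( cc & 1 )) $(( (cc >> 14) & 3 )) $(( (cc >> 16) & 0xf )) $(( (cc >> 20) & 0xf ))
printf 'CSTS: RDY=%d SHST=%d\n' $(( csts & 1 )) $(( (csts >> 2) & 3 ))

IOSQES=6 and IOCQES=4 are the 64-byte and 16-byte queue entry sizes from the report (2^6 and 2^4). SHST stays 0 throughout the captured window, so the poll keeps reissuing FABRIC PROPERTY GET; SHST=2 would signal shutdown complete.
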
00:19:39.481 [2024-11-27 12:58:05.814614] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814623] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.481 [2024-11-27 12:58:05.814653] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.481 [2024-11-27 12:58:05.814658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0
00:19:39.481 [2024-11-27 12:58:05.814665] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814674] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.481 [2024-11-27 12:58:05.814697] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.481 [2024-11-27 12:58:05.814703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0
00:19:39.481 [2024-11-27 12:58:05.814710] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814719] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.481 [2024-11-27 12:58:05.814748] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.481 [2024-11-27 12:58:05.814754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0
00:19:39.481 [2024-11-27 12:58:05.814760] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814769] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.481 [2024-11-27 12:58:05.814799] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:39.481 [2024-11-27 12:58:05.814806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0
00:19:39.481 [2024-11-27 12:58:05.814813] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814822] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300
00:19:39.481 [2024-11-27 12:58:05.814830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:39.481 [2024-11-27 12:58:05.814846]
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.481 [2024-11-27 12:58:05.814852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:39.481 [2024-11-27 12:58:05.814858] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180300 00:19:39.481 [2024-11-27 12:58:05.814867] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.814875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.814893] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.814899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.814905] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.814914] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.814922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.814937] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.814943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.814950] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.814959] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.814967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.814986] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.814992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.814998] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815007] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.815036] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.815042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.815048] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815057] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.815080] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.815086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.815092] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815101] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.815125] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.815130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.815136] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815145] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.815171] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.815176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.815183] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815191] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.815217] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.815223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.815229] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815238] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.815267] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.815273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 
12:58:05.815279] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815288] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.815317] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.815323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.815329] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815338] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.815363] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.815368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.815375] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815383] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.815413] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.815418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.815425] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815433] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.815459] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.815464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.815471] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815479] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.482 [2024-11-27 12:58:05.815501] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.482 [2024-11-27 12:58:05.815507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:19:39.482 [2024-11-27 12:58:05.815513] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815522] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.482 [2024-11-27 12:58:05.815530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
[... the same four-entry cycle (CQ recv completion -> SUCCESS completion -> nvme_rdma_request_ready -> _nvme_rdma_qpair_submit_request -> FABRIC PROPERTY GET) repeats for sqhd 000f through 0004, wrapping from 001f back to 0000; only the timestamps, the sqhd counter, and the local request address (0x2000003cf898 .. 0x2000003cfaf0, 0x2000003cf640 ..) change between iterations ...]
00:19:39.483 [2024-11-27 12:58:05.816603] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.484 [2024-11-27 12:58:05.820614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:39.484 [2024-11-27
12:58:05.820622] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180300 00:19:39.484 [2024-11-27 12:58:05.820631] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180300 00:19:39.484 [2024-11-27 12:58:05.820639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:39.484 [2024-11-27 12:58:05.820661] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:39.484 [2024-11-27 12:58:05.820666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0006 p:0 m:0 dnr:0 00:19:39.484 [2024-11-27 12:58:05.820673] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180300 00:19:39.484 [2024-11-27 12:58:05.820680] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:19:39.743 Used: 0% 00:19:39.743 Data Units Read: 0 00:19:39.743 Data Units Written: 0 00:19:39.743 Host Read Commands: 0 00:19:39.743 Host Write Commands: 0 00:19:39.743 Controller Busy Time: 0 minutes 00:19:39.743 Power Cycles: 0 00:19:39.743 Power On Hours: 0 hours 00:19:39.743 Unsafe Shutdowns: 0 00:19:39.743 Unrecoverable Media Errors: 0 00:19:39.743 Lifetime Error Log Entries: 0 00:19:39.743 Warning Temperature Time: 0 minutes 00:19:39.743 Critical Temperature Time: 0 minutes 00:19:39.743 00:19:39.743 Number of Queues 00:19:39.743 ================ 00:19:39.743 Number of I/O Submission Queues: 127 00:19:39.743 Number of I/O Completion Queues: 127 00:19:39.743 00:19:39.743 Active Namespaces 00:19:39.743 ================= 00:19:39.743 Namespace ID:1 00:19:39.743 Error Recovery Timeout: Unlimited 00:19:39.743 Command Set Identifier: NVM (00h) 00:19:39.743 Deallocate: Supported 00:19:39.743 Deallocated/Unwritten Error: Not Supported 00:19:39.743 Deallocated Read Value: Unknown 00:19:39.743 Deallocate in Write Zeroes: Not Supported 00:19:39.743 Deallocated Guard Field: 0xFFFF 00:19:39.743 Flush: Supported 00:19:39.743 Reservation: Supported 00:19:39.743 Namespace Sharing Capabilities: Multiple Controllers 00:19:39.743 Size (in LBAs): 131072 (0GiB) 00:19:39.743 Capacity (in LBAs): 131072 (0GiB) 00:19:39.743 Utilization (in LBAs): 131072 (0GiB) 00:19:39.743 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:39.743 EUI64: ABCDEF0123456789 00:19:39.743 UUID: e4a521a9-18fb-407e-91ea-301525459894 00:19:39.743 Thin Provisioning: Not Supported 00:19:39.743 Per-NS Atomic Units: Yes 00:19:39.743 Atomic Boundary Size (Normal): 0 00:19:39.743 Atomic Boundary Size (PFail): 0 00:19:39.743 Atomic Boundary Offset: 0 00:19:39.743 Maximum Single Source Range Length: 65535 00:19:39.743 Maximum Copy Length: 65535 00:19:39.743 Maximum Source Range Count: 1 00:19:39.743 NGUID/EUI64 Never Reused: No 00:19:39.743 Namespace Write Protected: No 00:19:39.743 Number of LBA Formats: 1 00:19:39.743 Current LBA Format: LBA Format #00 00:19:39.743 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:39.743 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@10 -- # set +x 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:39.743 rmmod nvme_rdma 00:19:39.743 rmmod nvme_fabrics 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 25587 ']' 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 25587 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 25587 ']' 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 25587 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 25587 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 25587' 00:19:39.743 killing process with pid 25587 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 25587 00:19:39.743 12:58:05 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 25587 00:19:40.002 12:58:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:40.002 12:58:06 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:40.002 00:19:40.002 real 0m10.501s 00:19:40.002 user 0m9.371s 00:19:40.002 sys 0m6.847s 00:19:40.002 12:58:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.002 12:58:06 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:40.002 ************************************ 00:19:40.002 END TEST nvmf_identify 00:19:40.002 ************************************ 00:19:40.002 12:58:06 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:19:40.002 12:58:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:40.002 12:58:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.002 12:58:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.002 ************************************ 00:19:40.002 START TEST nvmf_perf 00:19:40.002 ************************************ 00:19:40.002 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:19:40.262 * Looking for test storage... 00:19:40.262 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:40.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.262 --rc genhtml_branch_coverage=1 00:19:40.262 --rc genhtml_function_coverage=1 00:19:40.262 --rc genhtml_legend=1 00:19:40.262 --rc geninfo_all_blocks=1 00:19:40.262 --rc geninfo_unexecuted_blocks=1 00:19:40.262 00:19:40.262 ' 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:40.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.262 --rc genhtml_branch_coverage=1 00:19:40.262 --rc genhtml_function_coverage=1 00:19:40.262 --rc genhtml_legend=1 00:19:40.262 --rc geninfo_all_blocks=1 00:19:40.262 --rc geninfo_unexecuted_blocks=1 00:19:40.262 00:19:40.262 ' 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:40.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.262 --rc genhtml_branch_coverage=1 00:19:40.262 --rc genhtml_function_coverage=1 00:19:40.262 --rc genhtml_legend=1 00:19:40.262 --rc geninfo_all_blocks=1 00:19:40.262 --rc geninfo_unexecuted_blocks=1 00:19:40.262 00:19:40.262 ' 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:40.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.262 --rc genhtml_branch_coverage=1 00:19:40.262 --rc genhtml_function_coverage=1 00:19:40.262 --rc genhtml_legend=1 00:19:40.262 --rc geninfo_all_blocks=1 00:19:40.262 --rc geninfo_unexecuted_blocks=1 00:19:40.262 00:19:40.262 ' 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.262 12:58:06 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.262 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:40.263 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.263 12:58:06 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:40.263 12:58:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.385 12:58:14 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:48.385 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:48.385 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:48.385 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
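The pci_devs scan above (and continuing below for the second port) finds both ConnectX-4 Lx functions (0x15b3 - 0x1015) and resolves their net devices through sysfs. A minimal standalone sketch of the same idea, assuming only a Linux sysfs layout; this is not the framework's code, just an illustration of what gather_supported_nvmf_pci_devs is doing:

    #!/usr/bin/env bash
    # Hedged sketch: enumerate Mellanox (vendor 0x15b3) PCI functions and the
    # net devices bound to each one, mirroring the scan in nvmf/common.sh.
    mellanox=0x15b3
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$mellanox" ]] || continue
        device=$(<"$pci/device")              # e.g. 0x1015 for ConnectX-4 Lx
        pci_net_devs=("$pci"/net/*)           # net interfaces of this function
        [[ -e ${pci_net_devs[0]} ]] || continue
        echo "Found ${pci##*/} ($mellanox - $device): ${pci_net_devs[*]##*/}"
    done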
00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:48.385 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:48.385 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:48.386 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:48.386 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:48.386 altname enp217s0f0np0 00:19:48.386 altname ens818f0np0 00:19:48.386 inet 192.168.100.8/24 scope global mlx_0_0 00:19:48.386 valid_lft forever preferred_lft forever 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:48.386 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:48.386 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:48.386 altname enp217s0f1np1 00:19:48.386 altname ens818f1np1 00:19:48.386 inet 192.168.100.9/24 scope global mlx_0_1 00:19:48.386 valid_lft forever preferred_lft forever 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 
-- # '[' '' == iso ']' 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:48.386 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 
-- # RDMA_IP_LIST='192.168.100.8 00:19:48.646 192.168.100.9' 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:48.646 192.168.100.9' 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:48.646 192.168.100.9' 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=29997 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 29997 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 29997 ']' 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.646 12:58:14 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:48.646 [2024-11-27 12:58:14.889595] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
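nvmfappstart above backgrounds nvmf_tgt (pid 29997) and then sits in waitforlisten until the target's RPC socket answers. A hedged sketch of that polling pattern, assuming the stock /var/tmp/spdk.sock path and the standard rpc.py rpc_get_methods call; the retry budget and function name are illustrative, not the framework's exact internals:

    # Hedged sketch of the waitforlisten idea: poll the UNIX-domain RPC socket
    # until the target answers, bailing out early if the process dies.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                     # never came up
    }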
00:19:48.646 [2024-11-27 12:58:14.889651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.646 [2024-11-27 12:58:14.978730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.646 [2024-11-27 12:58:15.018273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.646 [2024-11-27 12:58:15.018314] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.646 [2024-11-27 12:58:15.018323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.646 [2024-11-27 12:58:15.018331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.646 [2024-11-27 12:58:15.018338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.646 [2024-11-27 12:58:15.020076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.646 [2024-11-27 12:58:15.020171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.646 [2024-11-27 12:58:15.020268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.646 [2024-11-27 12:58:15.020270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.585 12:58:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.585 12:58:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:19:49.585 12:58:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.585 12:58:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.585 12:58:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:49.585 12:58:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.585 12:58:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:19:49.585 12:58:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:19:52.875 12:58:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:19:52.875 12:58:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:52.875 12:58:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:19:52.875 12:58:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:52.875 12:58:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:52.875 12:58:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:19:53.134 12:58:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:53.134 12:58:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:19:53.134 12:58:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:19:53.134 [2024-11-27 12:58:19.434275] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:19:53.134 [2024-11-27 12:58:19.455795] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x252e9f0/0x24043c0) succeed. 00:19:53.134 [2024-11-27 12:58:19.465220] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23ef480/0x2484080) succeed. 00:19:53.394 12:58:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.394 12:58:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:53.652 12:58:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:53.652 12:58:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:53.652 12:58:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:53.910 12:58:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:54.169 [2024-11-27 12:58:20.345010] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:54.169 12:58:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:54.427 12:58:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:19:54.427 12:58:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:19:54.427 12:58:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:54.427 12:58:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:19:55.803 Initializing NVMe Controllers 00:19:55.803 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:19:55.803 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:19:55.803 Initialization complete. Launching workers. 
00:19:55.803 ======================================================== 00:19:55.803 Latency(us) 00:19:55.803 Device Information : IOPS MiB/s Average min max 00:19:55.803 PCIE (0000:d8:00.0) NSID 1 from core 0: 102016.55 398.50 313.08 42.45 4235.92 00:19:55.803 ======================================================== 00:19:55.803 Total : 102016.55 398.50 313.08 42.45 4235.92 00:19:55.803 00:19:55.803 12:58:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:59.089 Initializing NVMe Controllers 00:19:59.089 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:59.089 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:59.089 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:59.089 Initialization complete. Launching workers. 00:19:59.089 ======================================================== 00:19:59.089 Latency(us) 00:19:59.089 Device Information : IOPS MiB/s Average min max 00:19:59.089 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6620.99 25.86 150.68 48.50 4087.15 00:19:59.089 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5138.99 20.07 194.21 70.75 4107.46 00:19:59.089 ======================================================== 00:19:59.089 Total : 11759.99 45.94 169.70 48.50 4107.46 00:19:59.089 00:19:59.089 12:58:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:02.424 Initializing NVMe Controllers 00:20:02.424 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:02.424 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:02.424 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:02.424 Initialization complete. Launching workers. 00:20:02.424 ======================================================== 00:20:02.424 Latency(us) 00:20:02.424 Device Information : IOPS MiB/s Average min max 00:20:02.424 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18354.79 71.70 1742.29 505.52 8068.35 00:20:02.424 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4021.22 15.71 8014.75 7203.58 11168.48 00:20:02.424 ======================================================== 00:20:02.424 Total : 22376.01 87.41 2869.52 505.52 11168.48 00:20:02.424 00:20:02.424 12:58:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:20:02.424 12:58:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:06.743 Initializing NVMe Controllers 00:20:06.743 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:06.743 Controller IO queue size 128, less than required. 00:20:06.743 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:20:06.743 Controller IO queue size 128, less than required. 00:20:06.743 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:06.743 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:06.743 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:06.743 Initialization complete. Launching workers. 00:20:06.743 ======================================================== 00:20:06.743 Latency(us) 00:20:06.743 Device Information : IOPS MiB/s Average min max 00:20:06.743 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3925.00 981.25 32850.87 13646.30 87587.40 00:20:06.743 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3997.00 999.25 31564.34 14863.76 68915.09 00:20:06.743 ======================================================== 00:20:06.743 Total : 7922.00 1980.50 32201.76 13646.30 87587.40 00:20:06.743 00:20:06.743 12:58:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:20:07.311 No valid NVMe controllers or AIO or URING devices found 00:20:07.311 Initializing NVMe Controllers 00:20:07.311 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:07.311 Controller IO queue size 128, less than required. 00:20:07.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:07.311 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:07.311 Controller IO queue size 128, less than required. 00:20:07.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:07.311 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:07.311 WARNING: Some requested NVMe devices were skipped 00:20:07.311 12:58:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:20:11.512 Initializing NVMe Controllers 00:20:11.512 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.512 Controller IO queue size 128, less than required. 00:20:11.512 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:11.512 Controller IO queue size 128, less than required. 00:20:11.512 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:11.512 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:11.512 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:11.512 Initialization complete. Launching workers. 
00:20:11.512 00:20:11.512 ==================== 00:20:11.512 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:11.512 RDMA transport: 00:20:11.512 dev name: mlx5_0 00:20:11.512 polls: 405544 00:20:11.512 idle_polls: 401871 00:20:11.512 completions: 44994 00:20:11.512 queued_requests: 1 00:20:11.512 total_send_wrs: 22497 00:20:11.512 send_doorbell_updates: 3420 00:20:11.512 total_recv_wrs: 22624 00:20:11.512 recv_doorbell_updates: 3426 00:20:11.512 --------------------------------- 00:20:11.512 00:20:11.512 ==================== 00:20:11.512 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:11.512 RDMA transport: 00:20:11.512 dev name: mlx5_0 00:20:11.512 polls: 404085 00:20:11.512 idle_polls: 403807 00:20:11.512 completions: 19962 00:20:11.512 queued_requests: 1 00:20:11.512 total_send_wrs: 9981 00:20:11.512 send_doorbell_updates: 255 00:20:11.512 total_recv_wrs: 10108 00:20:11.512 recv_doorbell_updates: 256 00:20:11.512 --------------------------------- 00:20:11.512 ======================================================== 00:20:11.512 Latency(us) 00:20:11.512 Device Information : IOPS MiB/s Average min max 00:20:11.512 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5615.28 1403.82 22826.08 11159.74 71234.92 00:20:11.512 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2491.13 622.78 51063.87 31688.38 79168.37 00:20:11.512 ======================================================== 00:20:11.512 Total : 8106.41 2026.60 31503.66 11159.74 79168.37 00:20:11.512 00:20:11.771 12:58:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:11.771 12:58:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.771 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:11.771 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:11.771 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:11.771 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:11.771 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:20:11.771 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:11.771 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:11.771 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:20:11.771 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.771 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:11.771 rmmod nvme_rdma 00:20:11.771 rmmod nvme_fabrics 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 29997 ']' 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 29997 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 29997 ']' 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@958 -- # kill -0 29997 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 29997 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 29997' 00:20:12.030 killing process with pid 29997 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 29997 00:20:12.030 12:58:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 29997 00:20:14.566 12:58:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:14.566 12:58:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:14.566 00:20:14.566 real 0m34.435s 00:20:14.566 user 1m45.643s 00:20:14.566 sys 0m7.895s 00:20:14.566 12:58:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.566 12:58:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:14.566 ************************************ 00:20:14.566 END TEST nvmf_perf 00:20:14.566 ************************************ 00:20:14.566 12:58:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:20:14.566 12:58:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:14.566 12:58:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.566 12:58:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.566 ************************************ 00:20:14.566 START TEST nvmf_fio_host 00:20:14.566 ************************************ 00:20:14.566 12:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:20:14.566 * Looking for test storage... 
00:20:14.826 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:14.826 12:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:14.826 12:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:20:14.826 12:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.826 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:14.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.827 --rc genhtml_branch_coverage=1 00:20:14.827 --rc genhtml_function_coverage=1 00:20:14.827 --rc genhtml_legend=1 00:20:14.827 --rc geninfo_all_blocks=1 00:20:14.827 --rc geninfo_unexecuted_blocks=1 00:20:14.827 00:20:14.827 ' 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:14.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.827 --rc genhtml_branch_coverage=1 00:20:14.827 --rc genhtml_function_coverage=1 00:20:14.827 --rc genhtml_legend=1 00:20:14.827 --rc geninfo_all_blocks=1 00:20:14.827 --rc geninfo_unexecuted_blocks=1 00:20:14.827 00:20:14.827 ' 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:14.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.827 --rc genhtml_branch_coverage=1 00:20:14.827 --rc genhtml_function_coverage=1 00:20:14.827 --rc genhtml_legend=1 00:20:14.827 --rc geninfo_all_blocks=1 00:20:14.827 --rc geninfo_unexecuted_blocks=1 00:20:14.827 00:20:14.827 ' 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:14.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.827 --rc genhtml_branch_coverage=1 00:20:14.827 --rc genhtml_function_coverage=1 00:20:14.827 --rc genhtml_legend=1 00:20:14.827 --rc geninfo_all_blocks=1 00:20:14.827 --rc geninfo_unexecuted_blocks=1 00:20:14.827 00:20:14.827 ' 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.827 12:58:41 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:14.827 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:14.827 
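Note on the "[: : integer expression expected" message from nvmf/common.sh line 33 above: bash's numeric test requires integers on both sides, so when a flag variable is unset the traced expansion '[' '' -eq 1 ']' has an empty left operand, prints that error, and returns non-zero, which this script tolerates. A minimal sketch of the failure and the usual guard (FLAG is a hypothetical stand-in for whichever variable is unset here):

    unset FLAG
    [ "$FLAG" -eq 1 ] && echo enabled         # stderr: [: : integer expression expected
    [ "${FLAG:-0}" -eq 1 ] || echo disabled   # defaulting the empty expansion keeps the test numeric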
12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.827 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.828 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:14.828 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:14.828 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:20:14.828 12:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.958 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:22.959 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:22.959 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:22.959 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:22.960 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:22.960 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:22.960 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:22.961 
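allocate_nic_ips, traced next, walks get_rdma_if_list and reads each RDMA interface's IPv4 address with an ip/awk/cut pipeline. Condensed into a standalone sketch (interface name and address taken from this rig's trace):

    get_ip_address() {
        local interface=$1
        # "ip -o" prints one record per line; field 4 is "ADDR/PREFIX",
        # so awk selects the field and cut drops the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # prints 192.168.100.8 on this rig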
12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:22.961 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:22.961 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:22.961 altname enp217s0f0np0 00:20:22.961 altname ens818f0np0 00:20:22.961 inet 192.168.100.8/24 scope global mlx_0_0 00:20:22.961 valid_lft forever preferred_lft forever 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:22.961 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:22.962 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:22.962 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:22.962 altname enp217s0f1np1 00:20:22.962 altname ens818f1np1 00:20:22.962 inet 192.168.100.9/24 scope global mlx_0_1 00:20:22.962 valid_lft forever preferred_lft forever 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:22.962 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:22.963 12:58:48 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:22.963 192.168.100.9' 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:22.963 192.168.100.9' 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:22.963 192.168.100.9' 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=38283 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 
'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 38283 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 38283 ']' 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.963 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.964 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.964 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.964 12:58:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.964 [2024-11-27 12:58:48.950009] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:20:22.964 [2024-11-27 12:58:48.950062] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.964 [2024-11-27 12:58:49.039867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.964 [2024-11-27 12:58:49.081037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.964 [2024-11-27 12:58:49.081076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.964 [2024-11-27 12:58:49.081086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.964 [2024-11-27 12:58:49.081094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.964 [2024-11-27 12:58:49.081101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.964 [2024-11-27 12:58:49.082817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.964 [2024-11-27 12:58:49.082909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.964 [2024-11-27 12:58:49.082998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.964 [2024-11-27 12:58:49.083000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.532 12:58:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.532 12:58:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:20:23.532 12:58:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:23.791 [2024-11-27 12:58:49.965395] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16a2df0/0x16a72e0) succeed. 00:20:23.791 [2024-11-27 12:58:49.974643] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16a4480/0x16e8980) succeed. 
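With the target process listening, the fio host test provisions it over JSON-RPC. The transport call just above plus the calls in the trace that follows reduce to the sequence below (workspace path factored into $RPC; flags copied from the trace):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc1                       # backing bdev
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420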
00:20:23.791 12:58:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:23.791 12:58:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:23.791 12:58:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.050 12:58:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:24.050 Malloc1 00:20:24.050 12:58:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:24.309 12:58:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:24.568 12:58:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:24.826 [2024-11-27 12:58:50.977536] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:24.826 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:25.086 12:58:51 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:25.086 12:58:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:20:25.353 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:25.353 fio-3.35 00:20:25.353 Starting 1 thread 00:20:27.911 00:20:27.911 test: (groupid=0, jobs=1): err= 0: pid=38968: Wed Nov 27 12:58:53 2024 00:20:27.911 read: IOPS=18.0k, BW=70.1MiB/s (73.5MB/s)(141MiB/2004msec) 00:20:27.911 slat (nsec): min=1346, max=34475, avg=1478.94, stdev=460.47 00:20:27.911 clat (usec): min=2055, max=6495, avg=3538.85, stdev=79.05 00:20:27.911 lat (usec): min=2077, max=6496, avg=3540.33, stdev=78.96 00:20:27.911 clat percentiles (usec): 00:20:27.911 | 1.00th=[ 3490], 5.00th=[ 3523], 10.00th=[ 3523], 20.00th=[ 3523], 00:20:27.911 | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3556], 00:20:27.911 | 70.00th=[ 3556], 80.00th=[ 3556], 90.00th=[ 3556], 95.00th=[ 3556], 00:20:27.911 | 99.00th=[ 3589], 99.50th=[ 3589], 99.90th=[ 4293], 99.95th=[ 5932], 00:20:27.911 | 99.99th=[ 6456] 00:20:27.911 bw ( KiB/s): min=70408, max=72584, per=100.00%, avg=71840.00, stdev=973.46, samples=4 00:20:27.911 iops : min=17602, max=18146, avg=17960.00, stdev=243.37, samples=4 00:20:27.911 write: IOPS=18.0k, BW=70.2MiB/s (73.6MB/s)(141MiB/2004msec); 0 zone resets 00:20:27.911 slat (nsec): min=1380, max=19099, avg=1554.78, stdev=475.74 00:20:27.911 clat (usec): min=2089, max=6470, avg=3537.12, stdev=75.36 00:20:27.911 lat (usec): min=2100, max=6471, avg=3538.68, stdev=75.29 00:20:27.911 clat percentiles (usec): 00:20:27.911 | 1.00th=[ 3490], 5.00th=[ 3523], 10.00th=[ 3523], 20.00th=[ 3523], 00:20:27.911 | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3556], 00:20:27.911 | 70.00th=[ 3556], 80.00th=[ 3556], 90.00th=[ 3556], 95.00th=[ 3556], 00:20:27.911 | 99.00th=[ 3589], 99.50th=[ 3589], 99.90th=[ 4686], 99.95th=[ 5604], 00:20:27.911 | 99.99th=[ 6063] 00:20:27.911 bw ( KiB/s): min=70384, max=72496, per=100.00%, avg=71904.00, stdev=1019.24, samples=4 00:20:27.911 iops : min=17596, max=18124, avg=17976.00, stdev=254.81, samples=4 00:20:27.911 lat (msec) : 4=99.87%, 10=0.13% 00:20:27.911 cpu : usr=99.55%, sys=0.00%, ctx=15, majf=0, minf=3 00:20:27.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:27.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:27.911 issued rwts: total=35983,36016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:27.911 00:20:27.911 Run status group 0 (all jobs): 00:20:27.911 READ: bw=70.1MiB/s (73.5MB/s), 70.1MiB/s-70.1MiB/s (73.5MB/s-73.5MB/s), io=141MiB (147MB), run=2004-2004msec 00:20:27.911 WRITE: bw=70.2MiB/s (73.6MB/s), 70.2MiB/s-70.2MiB/s (73.6MB/s-73.6MB/s), io=141MiB (148MB), run=2004-2004msec 00:20:27.911 12:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:27.911 12:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:27.911 12:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:27.911 12:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:27.911 12:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:27.911 12:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:27.911 12:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:27.911 12:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:27.911 12:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.911 12:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:27.912 12:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:27.912 12:58:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:27.912 12:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:27.912 12:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:27.912 12:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.912 12:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:27.912 12:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:27.912 12:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:27.912 12:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:27.912 12:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:27.912 12:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:27.912 12:58:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:28.173 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:28.173 fio-3.35 00:20:28.173 Starting 1 thread 00:20:30.698 00:20:30.698 test: (groupid=0, jobs=1): err= 0: pid=39623: Wed Nov 27 12:58:56 2024 00:20:30.698 read: IOPS=14.5k, BW=227MiB/s (238MB/s)(446MiB/1966msec) 00:20:30.698 slat (nsec): min=2232, max=39601, avg=2534.86, stdev=911.43 00:20:30.698 clat (usec): min=524, max=8092, avg=1546.52, stdev=1214.24 00:20:30.698 lat (usec): min=526, max=8107, avg=1549.06, stdev=1214.59 00:20:30.698 clat percentiles (usec): 00:20:30.698 | 1.00th=[ 685], 5.00th=[ 775], 10.00th=[ 832], 20.00th=[ 906], 00:20:30.698 | 30.00th=[ 979], 40.00th=[ 1057], 50.00th=[ 1156], 60.00th=[ 1254], 00:20:30.698 | 70.00th=[ 1385], 80.00th=[ 1549], 90.00th=[ 3326], 95.00th=[ 4817], 00:20:30.698 | 99.00th=[ 6194], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7504], 00:20:30.698 | 99.99th=[ 8094] 00:20:30.698 bw ( KiB/s): min=112160, max=116096, per=49.19%, avg=114232.00, stdev=1906.51, samples=4 00:20:30.698 iops : min= 7010, max= 7256, avg=7139.50, stdev=119.16, samples=4 00:20:30.698 write: IOPS=8218, BW=128MiB/s (135MB/s)(232MiB/1809msec); 0 zone resets 00:20:30.698 slat (usec): min=26, max=145, avg=28.23, stdev= 5.21 00:20:30.698 clat (usec): min=4134, max=20051, avg=12629.68, stdev=1792.42 00:20:30.698 lat (usec): min=4163, max=20080, avg=12657.90, stdev=1791.97 00:20:30.698 clat percentiles (usec): 00:20:30.698 | 1.00th=[ 6783], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11207], 00:20:30.698 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12649], 60.00th=[13042], 00:20:30.698 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14746], 95.00th=[15401], 00:20:30.698 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18744], 99.95th=[19006], 00:20:30.698 | 99.99th=[20055] 00:20:30.698 bw ( KiB/s): min=114560, max=122144, per=89.77%, avg=118056.00, stdev=3269.37, samples=4 00:20:30.698 iops : min= 7160, max= 7634, avg=7378.50, stdev=204.34, samples=4 00:20:30.698 lat (usec) : 750=2.36%, 1000=19.53% 00:20:30.698 lat (msec) : 2=36.15%, 4=2.10%, 10=7.44%, 20=32.40%, 50=0.01% 00:20:30.698 cpu : usr=96.61%, sys=1.65%, ctx=184, majf=0, minf=3 00:20:30.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:30.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:30.698 issued rwts: total=28533,14868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:30.698 00:20:30.698 Run status group 0 (all jobs): 00:20:30.698 READ: bw=227MiB/s (238MB/s), 227MiB/s-227MiB/s (238MB/s-238MB/s), io=446MiB (467MB), run=1966-1966msec 00:20:30.698 WRITE: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=232MiB (244MB), run=1809-1809msec 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:30.698 
12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:30.698 rmmod nvme_rdma 00:20:30.698 rmmod nvme_fabrics 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 38283 ']' 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 38283 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 38283 ']' 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 38283 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.698 12:58:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 38283 00:20:30.698 12:58:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.698 12:58:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.698 12:58:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 38283' 00:20:30.698 killing process with pid 38283 00:20:30.698 12:58:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 38283 00:20:30.698 12:58:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 38283 00:20:30.956 12:58:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:30.956 12:58:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:30.956 00:20:30.956 real 0m16.462s 00:20:30.956 user 0m56.600s 00:20:30.956 sys 0m7.148s 00:20:30.956 12:58:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.956 12:58:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.956 ************************************ 00:20:30.956 END TEST nvmf_fio_host 00:20:30.956 ************************************ 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 
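Condensed from the xtrace above, the nvmf_fio_host flow boils down to four target-side RPCs plus an LD_PRELOAD'ed fio run. This is a sketch, not the full script: rpc.py stands in for the complete scripts/rpc.py path and the workspace prefixes are dropped, but the sizes, NQN, address, and port are the values the log shows being passed:

# build a ram-backed namespace and expose it over NVMe/RDMA
rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB ramdisk, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# drive it through SPDK's fio plugin rather than the kernel initiator
LD_PRELOAD=build/fio/spdk_nvme fio app/fio/nvme/example_config.fio \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096

The ioengine=spdk field in fio's banner above confirms the plugin, not the kernel initiator, issued the I/O; the empty asan_lib checks simply mean this build carried no sanitizer runtime to preload alongside it.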
00:20:31.213 12:58:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.213 ************************************ 00:20:31.213 START TEST nvmf_failover 00:20:31.213 ************************************ 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:20:31.213 * Looking for test storage... 00:20:31.213 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.213 --rc genhtml_branch_coverage=1 00:20:31.213 --rc genhtml_function_coverage=1 00:20:31.213 --rc genhtml_legend=1 00:20:31.213 --rc geninfo_all_blocks=1 00:20:31.213 --rc geninfo_unexecuted_blocks=1 00:20:31.213 00:20:31.213 ' 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.213 --rc genhtml_branch_coverage=1 00:20:31.213 --rc genhtml_function_coverage=1 00:20:31.213 --rc genhtml_legend=1 00:20:31.213 --rc geninfo_all_blocks=1 00:20:31.213 --rc geninfo_unexecuted_blocks=1 00:20:31.213 00:20:31.213 ' 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.213 --rc genhtml_branch_coverage=1 00:20:31.213 --rc genhtml_function_coverage=1 00:20:31.213 --rc genhtml_legend=1 00:20:31.213 --rc geninfo_all_blocks=1 00:20:31.213 --rc geninfo_unexecuted_blocks=1 00:20:31.213 00:20:31.213 ' 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.213 --rc genhtml_branch_coverage=1 00:20:31.213 --rc genhtml_function_coverage=1 00:20:31.213 --rc genhtml_legend=1 00:20:31.213 --rc geninfo_all_blocks=1 00:20:31.213 --rc geninfo_unexecuted_blocks=1 00:20:31.213 00:20:31.213 ' 00:20:31.213 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.472 12:58:57 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:31.472 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:20:31.472 12:58:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.577 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:39.578 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:39.578 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:39.578 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:39.578 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:39.578 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:39.579 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:39.579 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:39.579 altname enp217s0f0np0 00:20:39.579 altname ens818f0np0 00:20:39.579 inet 192.168.100.8/24 scope global mlx_0_0 00:20:39.579 
valid_lft forever preferred_lft forever 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:39.579 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:39.579 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:39.579 altname enp217s0f1np1 00:20:39.579 altname ens818f1np1 00:20:39.579 inet 192.168.100.9/24 scope global mlx_0_1 00:20:39.579 valid_lft forever preferred_lft forever 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.579 12:59:05 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:39.579 192.168.100.9' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:39.579 192.168.100.9' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:39.579 192.168.100.9' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=44066 00:20:39.579 
12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 44066 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 44066 ']' 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.579 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.580 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.580 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.580 12:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:39.580 [2024-11-27 12:59:05.649185] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:20:39.580 [2024-11-27 12:59:05.649235] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.580 [2024-11-27 12:59:05.735659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:39.580 [2024-11-27 12:59:05.775885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.580 [2024-11-27 12:59:05.775928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.580 [2024-11-27 12:59:05.775938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.580 [2024-11-27 12:59:05.775946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.580 [2024-11-27 12:59:05.775953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
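The network prologue that precedes the app start amounts to loading the RDMA kernel stack and scraping each Mellanox netdev's IPv4 address. A minimal shell equivalent of what common.sh just did, using the interface names discovered on this rig:

for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do modprobe "$m"; done
for i in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$i" | awk '{print $4}' | cut -d/ -f1
done
# -> 192.168.100.8 (mlx_0_0) and 192.168.100.9 (mlx_0_1)

common.sh exports these as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP and loads nvme-rdma last; the failover test below listens only on the first address, flipping between ports 4420/4421/4422 rather than between NICs.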
00:20:39.580 [2024-11-27 12:59:05.777450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.580 [2024-11-27 12:59:05.777549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.580 [2024-11-27 12:59:05.777552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.143 12:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.143 12:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:20:40.143 12:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.143 12:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.143 12:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:40.143 12:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.143 12:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:40.400 [2024-11-27 12:59:06.722733] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1834570/0x1838a60) succeed. 00:20:40.400 [2024-11-27 12:59:06.731857] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1835b60/0x187a100) succeed. 00:20:40.657 12:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:40.913 Malloc0 00:20:40.913 12:59:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.913 12:59:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:41.169 12:59:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:41.426 [2024-11-27 12:59:07.628103] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:41.426 12:59:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:41.682 [2024-11-27 12:59:07.824458] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:41.682 12:59:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:20:41.682 [2024-11-27 12:59:08.021159] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:20:41.682 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=44378 00:20:41.682 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:20:41.682 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:41.682 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 44378 /var/tmp/bdevperf.sock 00:20:41.682 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 44378 ']' 00:20:41.682 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.682 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.682 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.682 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.682 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:42.617 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.617 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:20:42.617 12:59:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:42.874 NVMe0n1 00:20:42.874 12:59:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:43.132 00:20:43.132 12:59:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=44639 00:20:43.132 12:59:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:43.132 12:59:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:44.506 12:59:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:44.506 12:59:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:47.787 12:59:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:47.787 00:20:47.787 12:59:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:47.787 12:59:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:51.069 12:59:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:51.069 [2024-11-27 12:59:17.338750] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:51.069 12:59:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:52.003 12:59:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:20:52.261 12:59:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 44639 00:20:58.830 { 00:20:58.830 "results": [ 00:20:58.830 { 00:20:58.830 "job": "NVMe0n1", 00:20:58.830 "core_mask": "0x1", 00:20:58.830 "workload": "verify", 00:20:58.830 "status": "finished", 00:20:58.830 "verify_range": { 00:20:58.830 "start": 0, 00:20:58.830 "length": 16384 00:20:58.830 }, 00:20:58.830 "queue_depth": 128, 00:20:58.830 "io_size": 4096, 00:20:58.830 "runtime": 15.004926, 00:20:58.830 "iops": 14364.349414319004, 00:20:58.830 "mibps": 56.11073989968361, 00:20:58.830 "io_failed": 4476, 00:20:58.830 "io_timeout": 0, 00:20:58.830 "avg_latency_us": 8706.526442557679, 00:20:58.830 "min_latency_us": 348.9792, 00:20:58.830 "max_latency_us": 1020054.7328 00:20:58.830 } 00:20:58.830 ], 00:20:58.830 "core_count": 1 00:20:58.830 } 00:20:58.830 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 44378 00:20:58.830 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 44378 ']' 00:20:58.830 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 44378 00:20:58.830 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:20:58.830 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.830 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 44378 00:20:58.830 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:58.830 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:58.830 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 44378' 00:20:58.830 killing process with pid 44378 00:20:58.830 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 44378 00:20:58.830 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 44378 00:20:58.830 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:58.830 [2024-11-27 12:59:08.099966] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:20:58.830 [2024-11-27 12:59:08.099966] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:20:58.830 [2024-11-27 12:59:08.100020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44378 ]
00:20:58.830 [2024-11-27 12:59:08.190653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:58.830 [2024-11-27 12:59:08.231685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:58.830 Running I/O for 15 seconds...
00:20:58.830 18048.00 IOPS, 70.50 MiB/s [2024-11-27T11:59:25.215Z]
00:20:58.830 9792.00 IOPS, 38.25 MiB/s [2024-11-27T11:59:25.215Z]
00:20:58.830 [2024-11-27 12:59:11.667724 .. 12:59:11.670194] nvme_qpair.c: [121 WRITE (lba:25656..26616, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and 6 READ (lba:25600..25640, SGL KEYED DATA BLOCK len:0x1000 key:0x181400) command/completion pairs elided; each 243:nvme_io_qpair_print_command *NOTICE* entry is followed by 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0]
00:20:58.833 [2024-11-27 12:59:11.672031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:58.833 [2024-11-27 12:59:11.672044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:58.833 [2024-11-27 12:59:11.672052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25648 len:8 PRP1 0x0 PRP2 0x0
00:20:58.833 [2024-11-27 12:59:11.672063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.833 [2024-11-27 12:59:11.672106] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:20:58.833 [2024-11-27 12:59:11.672117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:58.833 [2024-11-27 12:59:11.674888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:58.833 [2024-11-27 12:59:11.689853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:58.833 [2024-11-27 12:59:11.731704] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:20:58.833 11624.00 IOPS, 45.41 MiB/s [2024-11-27T11:59:25.218Z]
00:20:58.833 13260.25 IOPS, 51.80 MiB/s [2024-11-27T11:59:25.218Z]
00:20:58.833 12593.60 IOPS, 49.19 MiB/s [2024-11-27T11:59:25.218Z]
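[annotation] The block above is one complete failover cycle as the host logs it: I/O queued on the dying connection is completed with ABORTED - SQ DELETION, bdev_nvme_failover_trid moves the controller from 192.168.100.8:4420 to 4421, the controller is reset onto the new path, and the per-second throughput dips (9792 IOPS) before recovering (13260 IOPS). The removal of the 4421 listener triggers the same pattern again below. While such a run is in flight, the active path can be watched from outside; a sketch (not part of the test), reusing $RPC and $BPERF_SOCK from the sketch further up and assuming the transport ID of the current path appears in bdev_nvme_get_controllers output, as it does in recent SPDK:

# Print the transport service ID (port) of NVMe0's paths once a second;
# across a failover the reported trsvcid flips from 4420 to 4421, etc.
while sleep 1; do
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_get_controllers -n NVMe0 | grep trsvcid
done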
00:20:58.834 [2024-11-27 12:59:15.152288 .. 12:59:15.153330] nvme_qpair.c: [second abort burst elided: WRITE (lba:124392..124544, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba:123832..124080, SGL KEYED DATA BLOCK len:0x1000 key:0x182800) command/completion pairs, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0; the burst continues below]
00:20:58.835 [2024-11-27 12:59:15.153340]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.835 [2024-11-27 12:59:15.153349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.835 [2024-11-27 12:59:15.153367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.835 [2024-11-27 12:59:15.153386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.835 [2024-11-27 12:59:15.153405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.835 [2024-11-27 12:59:15.153424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153522] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x182800 00:20:58.835 [2024-11-27 12:59:15.153718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.835 [2024-11-27 12:59:15.153730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.153739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.153758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:124216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.153777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.153796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.153815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.153835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.153853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.153872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 
00:20:58.836 [2024-11-27 12:59:15.153883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.153892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.153912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.153930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.153949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.153970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.153988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.153998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154063] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.154082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 
12:59:15.154254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.836 [2024-11-27 12:59:15.154386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.154405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.154425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.836 [2024-11-27 12:59:15.154435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:124304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x182800 00:20:58.836 [2024-11-27 12:59:15.154444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x182800 00:20:58.837 [2024-11-27 12:59:15.154463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x182800 00:20:58.837 [2024-11-27 12:59:15.154482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x182800 00:20:58.837 [2024-11-27 12:59:15.154503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:124336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x182800 00:20:58.837 [2024-11-27 12:59:15.154522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:15.154541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:15.154560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:15.154579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:15.154597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:15.154619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:15.154638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:15.154660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:15.154679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x182800 00:20:58.837 [2024-11-27 12:59:15.154698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x182800 00:20:58.837 [2024-11-27 12:59:15.154717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x182800 00:20:58.837 [2024-11-27 12:59:15.154736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x182800 00:20:58.837 [2024-11-27 12:59:15.154756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.154766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x182800 00:20:58.837 [2024-11-27 12:59:15.154775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.156710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:58.837 [2024-11-27 12:59:15.156723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:58.837 [2024-11-27 12:59:15.156731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124384 len:8 PRP1 0x0 PRP2 0x0 00:20:58.837 [2024-11-27 12:59:15.156742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:15.156783] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:20:58.837 [2024-11-27 12:59:15.156795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:58.837 [2024-11-27 12:59:15.159576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:58.837 [2024-11-27 12:59:15.174338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 00:20:58.837 [2024-11-27 12:59:15.209791] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:20:58.837 11630.17 IOPS, 45.43 MiB/s [2024-11-27T11:59:25.222Z] 12583.57 IOPS, 49.15 MiB/s [2024-11-27T11:59:25.222Z] 13303.25 IOPS, 51.97 MiB/s [2024-11-27T11:59:25.222Z] 13748.89 IOPS, 53.71 MiB/s [2024-11-27T11:59:25.222Z] [2024-11-27 12:59:19.550849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:19.550893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.550912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:19.550921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.550932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:19.550941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.550951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:19.550960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.550970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:19.550979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.550989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:19.550998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.551008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.837 [2024-11-27 12:59:19.551017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.551028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x181400 00:20:58.837 [2024-11-27 12:59:19.551037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.551047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x181400 00:20:58.837 [2024-11-27 12:59:19.551056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.551066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x181400 00:20:58.837 [2024-11-27 12:59:19.551075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.551086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x181400 00:20:58.837 [2024-11-27 12:59:19.551094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.551105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x181400 00:20:58.837 [2024-11-27 12:59:19.551114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.551124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x181400 00:20:58.837 [2024-11-27 12:59:19.551135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.551145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x181400 00:20:58.837 [2024-11-27 12:59:19.551154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.551165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x181400 00:20:58.837 [2024-11-27 12:59:19.551173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.837 [2024-11-27 12:59:19.551184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 
[2024-11-27 12:59:19.551379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.838 [2024-11-27 12:59:19.551503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.838 [2024-11-27 12:59:19.551522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.838 [2024-11-27 12:59:19.551541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:58.838 [2024-11-27 12:59:19.551560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.838 [2024-11-27 12:59:19.551580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.838 [2024-11-27 12:59:19.551599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x181400 00:20:58.838 [2024-11-27 12:59:19.551680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.838 [2024-11-27 12:59:19.551699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.838 [2024-11-27 12:59:19.551718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 12:59:19.551728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.838 [2024-11-27 12:59:19.551737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 00:20:58.838 [2024-11-27 
12:59:19.551748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:58.838 [2024-11-27 12:59:19.551756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0
[... ~90 further queued commands on sqid:1 elided: WRITE lba:97040-97224 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:96432-96888 (SGL KEYED DATA BLOCK len:0x1000 key:0x181400), each printed with the identical completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:59553000 sqhd:7210 p:0 m:0 dnr:0 ...]
00:20:58.841 [2024-11-27 12:59:19.555243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:58.841 [2024-11-27 12:59:19.555256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:58.841 [2024-11-27 12:59:19.555265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96896 len:8 PRP1 0x0 PRP2 0x0
00:20:58.841 [2024-11-27 12:59:19.555275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
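(Every command in the run above completes with the same status the moment the host deletes submission queue 1 for the failover: SCT 0x0 / SC 0x08, the NVMe generic status "Command Aborted due to SQ Deletion". The storm can be quantified from the captured bdevperf log, the same try.txt that failover.sh cats further below; these grep lines are illustrative, not part of the test itself:)

    # count the aborted completions and split them by opcode
    grep -c 'ABORTED - SQ DELETION' try.txt
    grep -oE '(READ|WRITE) sqid:1' try.txt | sort | uniq -c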
00:20:58.841 [2024-11-27 12:59:19.555317] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:20:58.841 [2024-11-27 12:59:19.555331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:20:58.841 [2024-11-27 12:59:19.558118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:20:58.841 [2024-11-27 12:59:19.572571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0
00:20:58.841 12374.00 IOPS, 48.34 MiB/s [2024-11-27T11:59:25.226Z]
[2024-11-27 12:59:19.612392] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:20:58.841 12886.91 IOPS, 50.34 MiB/s [2024-11-27T11:59:25.226Z]
13349.42 IOPS, 52.15 MiB/s [2024-11-27T11:59:25.226Z]
13740.38 IOPS, 53.67 MiB/s [2024-11-27T11:59:25.226Z]
14075.50 IOPS, 54.98 MiB/s [2024-11-27T11:59:25.226Z]
14365.80 IOPS, 56.12 MiB/s
00:20:58.841 Latency(us)
00:20:58.841 [2024-11-27T11:59:25.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:58.841 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:58.841 Verification LBA range: start 0x0 length 0x4000
00:20:58.841 NVMe0n1 : 15.00 14364.35 56.11 298.30 0.00 8706.53 348.98 1020054.73
00:20:58.841 [2024-11-27T11:59:25.226Z] ===================================================================================================================
00:20:58.841 [2024-11-27T11:59:25.226Z] Total : 14364.35 56.11 298.30 0.00 8706.53 348.98 1020054.73
00:20:58.841 Received shutdown signal, test time was about 15.000000 seconds
00:20:58.841
00:20:58.841 Latency(us)
00:20:58.841 [2024-11-27T11:59:25.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:58.841 [2024-11-27T11:59:25.226Z] ===================================================================================================================
00:20:58.841 [2024-11-27T11:59:25.226Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:20:58.841 12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=47292
12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 47292 /var/tmp/bdevperf.sock
12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 47292 ']'
12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:58.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
12:59:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:20:59.407 12:59:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
12:59:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
12:59:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
[2024-11-27 12:59:25.956275] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:20:59.665 12:59:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
[2024-11-27 12:59:26.156932] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:20:59.924 12:59:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:00.183 NVMe0n1
00:21:00.183 12:59:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:00.441
00:21:00.441 12:59:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:00.699
00:21:00.699 12:59:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:59:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:21:00.957 12:59:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:01.215 12:59:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:21:04.504 12:59:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:59:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:21:04.504 12:59:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=48264
12:59:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
12:59:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 48264
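(Condensed, steps @76-@92 above reduce to the following sketch, before the perform_tests results land below. It assumes the same RPC socket, subsystem NQN, address, and ports as this run; RPC and SOCK are shorthand variables introduced here, not part of the trace:)

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # publish two additional RDMA listeners the host can fail over to
    $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4422

    # register all three paths under one bdev with the failover multipath policy
    for port in 4420 4421 4422; do
        $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t rdma \
            -a 192.168.100.8 -s $port -f ipv4 -n $NQN -x failover
    done

    # drop the active path, give bdev_nvme time to fail over, then re-verify
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $NQN
    sleep 3
    $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0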
00:21:05.434 {
00:21:05.434   "results": [
00:21:05.434     {
00:21:05.434       "job": "NVMe0n1",
00:21:05.434       "core_mask": "0x1",
00:21:05.434       "workload": "verify",
00:21:05.434       "status": "finished",
00:21:05.434       "verify_range": {
00:21:05.434         "start": 0,
00:21:05.434         "length": 16384
00:21:05.434       },
00:21:05.434       "queue_depth": 128,
00:21:05.434       "io_size": 4096,
00:21:05.434       "runtime": 1.005519,
00:21:05.434       "iops": 18085.187848265425,
00:21:05.434       "mibps": 70.64526503228682,
00:21:05.434       "io_failed": 0,
00:21:05.434       "io_timeout": 0,
00:21:05.434       "avg_latency_us": 7037.63512028595,
00:21:05.434       "min_latency_us": 370.2784,
00:21:05.434       "max_latency_us": 13369.344
00:21:05.434     }
00:21:05.434   ],
00:21:05.434   "core_count": 1
00:21:05.434 }
00:21:05.434 12:59:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-27 12:59:24.937184] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
[2024-11-27 12:59:24.937242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47292 ]
[2024-11-27 12:59:25.027185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 12:59:25.062828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-27 12:59:27.351557] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
[2024-11-27 12:59:27.352099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
[2024-11-27 12:59:27.352130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
[2024-11-27 12:59:27.376701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0
[2024-11-27 12:59:27.393986] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
Running I/O for 1 seconds...
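(The result block above is plain JSON, so the headline numbers can be pulled out mechanically; jq is an assumption here, not a tool this harness invokes:)

    # hypothetical post-processing of the perform_tests output, saved as result.json
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' result.json
    # -> NVMe0n1: 18085.187848265425 IOPS, 70.64526503228682 MiB/s, avg 7037.63512028595 us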
00:21:05.434 18048.00 IOPS, 70.50 MiB/s
00:21:05.434 Latency(us)
00:21:05.434 [2024-11-27T11:59:31.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:05.434 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:05.434 Verification LBA range: start 0x0 length 0x4000
00:21:05.434 NVMe0n1 : 1.01 18085.19 70.65 0.00 0.00 7037.64 370.28 13369.34
00:21:05.434 [2024-11-27T11:59:31.819Z] ===================================================================================================================
00:21:05.434 [2024-11-27T11:59:31.819Z] Total : 18085.19 70.65 0.00 0.00 7037.64 370.28 13369.34
12:59:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:59:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:21:05.690 12:59:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:05.946 12:59:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:59:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:21:06.204 12:59:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:06.204 12:59:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:21:09.487 12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:21:09.487 12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 47292
12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 47292 ']'
12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 47292
12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 47292
12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47292'
killing process with pid 47292
12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 47292
12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 47292
00:21:09.746 12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 --
# sync 00:21:09.746 12:59:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:10.005 rmmod nvme_rdma 00:21:10.005 rmmod nvme_fabrics 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 44066 ']' 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 44066 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 44066 ']' 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 44066 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 44066 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 44066' 00:21:10.005 killing process with pid 44066 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 44066 00:21:10.005 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 44066 00:21:10.263 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:10.263 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:10.263 00:21:10.263 real 0m39.135s 00:21:10.263 user 2m6.581s 00:21:10.263 sys 0m8.535s 00:21:10.263 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.263 12:59:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:10.263 
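(The teardown traced above reduces to two patterns: the guarded kill from autotest_common.sh and the ordered module unload from nvmftestfini. A condensed sketch follows; the retry count comes from the trace, while the helper body and the sleep are illustrative simplifications:)

    killprocess() { # skip pids that are gone or that belong to a sudo wrapper
        local pid=$1
        [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null || return 0
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 0
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

    # nvme-rdma holds a reference on nvme-fabrics, so it must come out first;
    # retry because the module stays busy until the last queue pair is torn down
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done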
************************************ 00:21:10.263 END TEST nvmf_failover 00:21:10.263 ************************************ 00:21:10.263 12:59:36 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:21:10.263 12:59:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.263 12:59:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.263 12:59:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.263 ************************************ 00:21:10.263 START TEST nvmf_host_discovery 00:21:10.263 ************************************ 00:21:10.263 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:21:10.523 * Looking for test storage... 00:21:10.523 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:10.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.523 --rc genhtml_branch_coverage=1 00:21:10.523 --rc genhtml_function_coverage=1 00:21:10.523 --rc genhtml_legend=1 00:21:10.523 --rc geninfo_all_blocks=1 00:21:10.523 --rc geninfo_unexecuted_blocks=1 00:21:10.523 00:21:10.523 ' 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:10.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.523 --rc genhtml_branch_coverage=1 00:21:10.523 --rc genhtml_function_coverage=1 00:21:10.523 --rc genhtml_legend=1 00:21:10.523 --rc geninfo_all_blocks=1 00:21:10.523 --rc geninfo_unexecuted_blocks=1 00:21:10.523 00:21:10.523 ' 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:10.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.523 --rc genhtml_branch_coverage=1 00:21:10.523 --rc genhtml_function_coverage=1 00:21:10.523 --rc genhtml_legend=1 00:21:10.523 --rc geninfo_all_blocks=1 00:21:10.523 --rc geninfo_unexecuted_blocks=1 00:21:10.523 00:21:10.523 ' 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:10.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.523 --rc genhtml_branch_coverage=1 00:21:10.523 --rc genhtml_function_coverage=1 00:21:10.523 --rc genhtml_legend=1 00:21:10.523 --rc geninfo_all_blocks=1 00:21:10.523 --rc geninfo_unexecuted_blocks=1 00:21:10.523 00:21:10.523 ' 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:10.523 12:59:36 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.523 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.523 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.524 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.524 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.524 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:21:10.524 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:21:10.524 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:21:10.524 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:21:10.524 00:21:10.524 real 0m0.229s 00:21:10.524 user 0m0.132s 00:21:10.524 sys 0m0.115s 00:21:10.524 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.524 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.524 ************************************ 00:21:10.524 END TEST nvmf_host_discovery 00:21:10.524 ************************************ 00:21:10.524 12:59:36 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:21:10.524 12:59:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.524 12:59:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.524 12:59:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.524 ************************************ 00:21:10.524 START TEST nvmf_host_multipath_status 00:21:10.524 ************************************ 00:21:10.524 12:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:21:10.783 * Looking for test storage... 00:21:10.783 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:21:10.783 12:59:37 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:10.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.783 --rc genhtml_branch_coverage=1 00:21:10.783 --rc genhtml_function_coverage=1 00:21:10.783 --rc genhtml_legend=1 00:21:10.783 --rc geninfo_all_blocks=1 00:21:10.783 --rc geninfo_unexecuted_blocks=1 00:21:10.783 00:21:10.783 ' 00:21:10.783 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:10.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.783 --rc genhtml_branch_coverage=1 00:21:10.783 --rc genhtml_function_coverage=1 00:21:10.783 --rc genhtml_legend=1 00:21:10.783 --rc geninfo_all_blocks=1 00:21:10.783 --rc geninfo_unexecuted_blocks=1 00:21:10.783 00:21:10.783 ' 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:10.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.784 --rc genhtml_branch_coverage=1 00:21:10.784 --rc genhtml_function_coverage=1 00:21:10.784 --rc genhtml_legend=1 00:21:10.784 --rc geninfo_all_blocks=1 00:21:10.784 --rc geninfo_unexecuted_blocks=1 00:21:10.784 00:21:10.784 ' 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:10.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.784 --rc genhtml_branch_coverage=1 00:21:10.784 --rc genhtml_function_coverage=1 
00:21:10.784 --rc genhtml_legend=1 00:21:10.784 --rc geninfo_all_blocks=1 00:21:10.784 --rc geninfo_unexecuted_blocks=1 00:21:10.784 00:21:10.784 ' 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:10.784 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.784 12:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:20.765 12:59:45 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:20.765 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:20.765 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:20.765 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.765 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:20.766 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:20.766 
12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:20.766 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:20.766 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:20.766 altname enp217s0f0np0 00:21:20.766 altname ens818f0np0 00:21:20.766 inet 192.168.100.8/24 scope global mlx_0_0 00:21:20.766 valid_lft forever preferred_lft forever 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
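[editor's note] The two interface lookups traced here (mlx_0_0 above, with mlx_0_1 resolving just below) run the same three-command pipeline: 'ip -o -4 addr show' prints one single-line record per address, awk field 4 is the CIDR address (e.g. 192.168.100.8/24), and 'cut -d/ -f1' drops the prefix length. A minimal sketch of the helper as reconstructed from the traced commands follows; the real get_ip_address lives in test/nvmf/common.sh, so treat this as an approximation rather than the verbatim source:

    # Reconstructed from the xtrace above: print the IPv4 address(es) on an interface.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # Usage, matching the trace: get_ip_address mlx_0_0   ->   192.168.100.8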
00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:20.766 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:20.766 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:20.766 altname enp217s0f1np1 00:21:20.766 altname ens818f1np1 00:21:20.766 inet 192.168.100.9/24 scope global mlx_0_1 00:21:20.766 valid_lft forever preferred_lft forever 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:21:20.766 12:59:45 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:20.766 192.168.100.9' 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:20.766 192.168.100.9' 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:20.766 192.168.100.9' 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:21:20.766 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:20.767 12:59:45 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=53407 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 53407 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 53407 ']' 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.767 12:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:20.767 [2024-11-27 12:59:45.776220] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:21:20.767 [2024-11-27 12:59:45.776277] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.767 [2024-11-27 12:59:45.864624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:20.767 [2024-11-27 12:59:45.902541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.767 [2024-11-27 12:59:45.902578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.767 [2024-11-27 12:59:45.902587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.767 [2024-11-27 12:59:45.902595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.767 [2024-11-27 12:59:45.902602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:20.767 [2024-11-27 12:59:45.903896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.767 [2024-11-27 12:59:45.903898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.767 12:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.767 12:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:21:20.767 12:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:20.767 12:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.767 12:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:20.767 12:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.767 12:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=53407 00:21:20.767 12:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:20.767 [2024-11-27 12:59:46.840157] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1552730/0x1556c20) succeed. 00:21:20.767 [2024-11-27 12:59:46.849144] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1553c80/0x15982c0) succeed. 00:21:20.767 12:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:20.767 Malloc0 00:21:20.767 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:21.025 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:21.283 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:21.283 [2024-11-27 12:59:47.652651] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:21.542 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:21.542 [2024-11-27 12:59:47.844997] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:21.542 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=53780 00:21:21.542 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:21.542 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 53780 /var/tmp/bdevperf.sock 00:21:21.542 12:59:47 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 53780 ']' 00:21:21.542 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.542 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.542 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.542 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.542 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:21.542 12:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:21.802 12:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.802 12:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:21:21.802 12:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:22.061 12:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:22.319 Nvme0n1 00:21:22.319 12:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:22.578 Nvme0n1 00:21:22.578 12:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:22.578 12:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:24.632 12:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:24.632 12:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:21:24.890 12:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:24.890 12:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:26.266 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:26.266 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:26.266 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:26.267 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:26.267 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:26.267 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:26.267 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:26.267 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:26.267 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:26.267 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:26.526 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:26.526 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:26.526 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:26.526 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:26.526 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:26.526 12:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:26.784 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:26.784 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:26.785 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:26.785 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:27.043 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.043 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
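[editor's note] Every check in this block is the same three-step probe: ask bdevperf for its current view of the I/O paths, select the path for one listener port with jq, and string-compare one flag (current / connected / accessible) against the expected value. Below is a sketch of port_status as reconstructed from the traced commands; rpc.py and the bdevperf socket path are taken from the log, while the wrapper itself is an approximation of host/multipath_status.sh, not its verbatim source:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # port_status <trsvcid> <attribute> <expected> -- returns non-zero on a
    # mismatch, which fails the test run under set -e.
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
                 jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }

As the surrounding trace shows, check_status then simply calls port_status six times, once per port/attribute pair.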
00:21:27.043 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:27.043 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.043 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.043 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:27.043 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:27.302 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:27.560 12:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:28.494 12:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:28.494 12:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:28.494 12:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.494 12:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:28.752 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:28.752 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:28.752 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.752 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:29.011 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.011 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:29.011 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.011 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:29.011 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.011 12:59:55 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:29.011 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.269 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:29.269 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.269 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:29.269 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:29.269 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.527 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.527 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:29.527 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.527 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:29.786 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.786 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:29.786 12:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:30.044 12:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:21:30.044 12:59:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:31.419 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:31.420 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:31.420 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.420 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:31.420 12:59:57 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.420 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:31.420 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.420 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:31.420 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:31.420 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:31.420 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:31.420 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.678 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.678 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:31.678 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:31.678 12:59:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.937 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.937 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:31.937 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:31.937 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.196 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.196 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:32.196 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.196 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:32.196 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.196 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
00:21:32.196 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:21:32.196 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:21:32.455 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:21:32.713 12:59:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:21:33.647 12:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:21:33.647 12:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:21:33.647 12:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:33.647 12:59:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:33.905 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:33.905 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:21:33.905 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:33.905 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:34.164 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:34.164 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:34.164 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:34.164 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:34.164 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:34.164 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:34.164 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:34.164 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:34.422 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:34.422 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:34.422 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:34.422 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:34.679 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:34.679 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:21:34.679 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:34.679 13:00:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:34.938 13:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:34.938 13:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:21:34.938 13:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible
00:21:34.938 13:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:21:35.197 13:00:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:21:36.141 13:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:21:36.141 13:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:21:36.141 13:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:36.141 13:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:36.399 13:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:36.399 13:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:21:36.399 13:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:36.400 13:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:36.658 13:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:36.658 13:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:36.658 13:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:36.658 13:00:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:36.917 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:36.917 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:36.917 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:36.917 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:36.917 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:36.917 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:21:36.917 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:36.917 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:37.175 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:37.175 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:21:37.175 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:37.175 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:37.432 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:37.432 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:21:37.432 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible
00:21:37.690 13:00:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
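The @59/@60 pairs above are the body of set_ANA_state: one nvmf_subsystem_listener_set_ana_state RPC per listener of nqn.2016-06.io.spdk:cnode1. A sketch matching the trace:

  set_ANA_state() {
      # set_ANA_state <state for port 4420> <state for port 4421>
      local rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t rdma -a 192.168.100.8 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t rdma -a 192.168.100.8 -s 4421 -n "$2"
  }

The one-second sleep after each call gives the host's ANA log page update a chance to land before check_status samples the paths again.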
00:21:37.690 13:00:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:21:39.063 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:21:39.063 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:21:39.063 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:39.063 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:39.063 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:39.063 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:21:39.063 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:39.063 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:39.063 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:39.063 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:39.063 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:39.063 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:39.322 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:39.322 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:39.322 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:39.322 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:39.580 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:39.580 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:21:39.580 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:39.580 13:00:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:39.839 13:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:39.839 13:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:21:39.839 13:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:39.839 13:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:39.839 13:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:39.839 13:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
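Up to the @116 RPC just above, the bdev ran with the default active_passive policy, which is why at most one path reported .current == true at a time; bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active lets every usable optimized (or, failing that, non-optimized) path be current at once. For reference, the expected check_status vectors for each combination exercised in this run, read off the @100-@135 lines of this log:

  ANA (4420/4421)               policy          current      connected  accessible
  non_optimized/non_optimized   active_passive  true/false   true/true  true/true
  non_optimized/inaccessible    active_passive  true/false   true/true  true/false
  inaccessible/inaccessible     active_passive  false/false  true/true  false/false
  inaccessible/optimized        active_passive  false/true   true/true  false/true
  optimized/optimized           active_active   true/true    true/true  true/true
  non_optimized/optimized       active_active   false/true   true/true  true/true
  non_optimized/non_optimized   active_active   true/true    true/true  true/true
  non_optimized/inaccessible    active_active   true/false   true/true  true/false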
00:21:40.098 13:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:21:40.098 13:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized
00:21:40.356 13:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:21:40.614 13:00:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:21:41.549 13:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:21:41.549 13:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:21:41.549 13:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:41.549 13:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:41.807 13:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:41.807 13:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:21:41.807 13:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:41.807 13:00:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:42.066 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:42.066 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:42.066 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:42.066 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:42.066 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:42.066 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:42.066 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:42.066 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:42.325 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:42.325 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:42.325 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:42.325 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:42.583 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:42.583 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:21:42.583 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:42.583 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:42.842 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:42.842 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:21:42.842 13:00:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:21:42.842 13:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:21:43.100 13:00:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:21:44.034 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:21:44.034 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:21:44.034 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:44.034 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:44.292 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:44.292 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:21:44.292 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:44.292 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:44.551 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:44.551 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:44.551 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:44.551 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:44.810 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:44.810 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:44.810 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:44.810 13:00:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:44.810 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:44.810 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:44.810 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:44.810 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:45.068 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:45.068 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:21:45.068 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:45.068 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:45.326 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:45.326 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:21:45.326 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:21:45.585 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized
00:21:45.585 13:00:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:21:46.961 13:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:21:46.961 13:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:21:46.961 13:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:46.961 13:00:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:46.961 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:46.961 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:21:46.961 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:46.961 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:46.961 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:46.961 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:46.961 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:46.961 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:47.219 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
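Each port_status probe above costs a full bdev_nvme_get_io_paths round trip. When poking at a run like this by hand, one call can dump all three flags for both listeners; an illustrative one-liner, not part of the test script:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
      jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid): current=\(.current) connected=\(.connected) accessible=\(.accessible)"'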
00:21:47.219 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:47.219 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:47.219 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:47.477 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:47.477 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:47.477 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:47.477 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:47.735 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:47.735 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:21:47.735 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:47.735 13:00:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:47.735 13:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:47.735 13:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:21:47.735 13:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:21:47.993 13:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:21:48.251 13:00:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:21:49.186 13:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:21:49.186 13:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:21:49.186 13:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:49.186 13:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:49.444 13:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:49.444 13:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:21:49.444 13:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:49.444 13:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:49.718 13:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:49.718 13:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:49.718 13:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:49.718 13:00:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:49.718 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:49.718 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:49.718 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:49.718 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:49.976 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:49.976 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:49.976 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:49.976 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:50.234 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:50.234 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:21:50.234 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:50.234 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:50.492 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:50.493 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 53780
00:21:50.493 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 53780 ']'
00:21:50.493 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 53780
00:21:50.493 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:21:50.493 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:50.493 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 53780
00:21:50.493 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:21:50.493 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:21:50.493 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 53780'
killing process with pid 53780
13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 53780
13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 53780
00:21:50.493 {
00:21:50.493 "results": [
00:21:50.493 {
00:21:50.493 "job": "Nvme0n1",
00:21:50.493 "core_mask": "0x4",
00:21:50.493 "workload": "verify",
00:21:50.493 "status": "terminated",
00:21:50.493 "verify_range": {
00:21:50.493 "start": 0,
00:21:50.493 "length": 16384
00:21:50.493 },
00:21:50.493 "queue_depth": 128,
00:21:50.493 "io_size": 4096,
00:21:50.493 "runtime": 27.688796,
00:21:50.493 "iops": 15970.79194053797,
00:21:50.493 "mibps": 62.38590601772645,
00:21:50.493 "io_failed": 0,
00:21:50.493 "io_timeout": 0,
00:21:50.493 "avg_latency_us": 7995.242410210488,
00:21:50.493 "min_latency_us": 83.968,
00:21:50.493 "max_latency_us": 3019898.88
00:21:50.493 }
00:21:50.493 ],
00:21:50.493 "core_count": 1
00:21:50.493 }
00:21:50.757 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 53780
00:21:50.757 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:50.757 [2024-11-27 12:59:47.910117] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:21:50.757 [2024-11-27 12:59:47.910168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53780 ]
00:21:50.757 [2024-11-27 12:59:47.995150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:50.757 [2024-11-27 12:59:48.034482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:50.757 Running I/O for 90 seconds...
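The JSON blob above is bdevperf's final per-job summary: the verify job on Nvme0n1 ends with status "terminated" (the @973 kill, not an I/O error; io_failed is 0) after 27.69 s at about 15971 IOPS, and the roughly 3.02 s max_latency_us is plausibly an I/O that sat queued while both paths were inaccessible. To pull the headline numbers from a saved copy of this JSON (results.json here is a hypothetical file name):

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json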
00:21:50.757 18432.00 IOPS, 72.00 MiB/s [2024-11-27T12:00:17.142Z] 18467.50 IOPS, 72.14 MiB/s [2024-11-27T12:00:17.142Z] 18517.00 IOPS, 72.33 MiB/s [2024-11-27T12:00:17.142Z] 18528.00 IOPS, 72.38 MiB/s [2024-11-27T12:00:17.142Z] 18560.00 IOPS, 72.50 MiB/s [2024-11-27T12:00:17.142Z] 18602.67 IOPS, 72.67 MiB/s [2024-11-27T12:00:17.142Z] 18619.14 IOPS, 72.73 MiB/s [2024-11-27T12:00:17.142Z] 18636.62 IOPS, 72.80 MiB/s [2024-11-27T12:00:17.142Z] 18638.56 IOPS, 72.81 MiB/s [2024-11-27T12:00:17.142Z] 18637.20 IOPS, 72.80 MiB/s [2024-11-27T12:00:17.142Z] 18647.45 IOPS, 72.84 MiB/s [2024-11-27T12:00:17.142Z] 18633.17 IOPS, 72.79 MiB/s [2024-11-27T12:00:17.142Z]
[2024-11-27 13:00:01.270627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.757 [2024-11-27 13:00:01.270666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:21:50.757 [2024-11-27 13:00:01.270702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.757 [2024-11-27 13:00:01.270713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:21:50.757 [2024-11-27 13:00:01.270726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.757 [2024-11-27 13:00:01.270736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:21:50.757 [2024-11-27 13:00:01.270749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.270758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.270770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.270779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.270791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:126592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.270800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.270812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.270821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.270832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.270842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.270853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.270862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.270874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.270889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.270901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.270910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.270922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.270931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.270943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.270952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.270963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.270973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.270984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.270994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.271015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:126680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.271036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.271058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.271079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.271100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.271121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.271143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.271164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.271186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.271206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.758 [2024-11-27 13:00:01.271229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x182a00
00:21:50.758 [2024-11-27 13:00:01.271250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x182a00
00:21:50.758 [2024-11-27 13:00:01.271272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x182a00
00:21:50.758 [2024-11-27 13:00:01.271293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x182a00
00:21:50.758 [2024-11-27 13:00:01.271315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x182a00
00:21:50.758 [2024-11-27 13:00:01.271336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x182a00
00:21:50.758 [2024-11-27 13:00:01.271358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x182a00
00:21:50.758 [2024-11-27 13:00:01.271380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x182a00
00:21:50.758 [2024-11-27 13:00:01.271405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x182a00
00:21:50.758 [2024-11-27 13:00:01.271427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x182a00
00:21:50.758 [2024-11-27 13:00:01.271450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:21:50.758 [2024-11-27 13:00:01.271462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d8000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043da000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dc000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043de000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e8000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ea000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f2000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.759 [2024-11-27 13:00:01.271952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:50.759 [2024-11-27 13:00:01.271973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.271985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.271994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.272006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.272016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.272028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.272037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.272049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.272058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.272070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.272079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.272093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.272103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.272114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.272124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.272137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.272146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.272158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.272167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.272179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.272188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.272200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.272210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.272222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x182a00
00:21:50.759 [2024-11-27 13:00:01.272231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:21:50.759 [2024-11-27 13:00:01.272243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x182a00
00:21:50.760 [2024-11-27 13:00:01.272253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:21:50.760 [2024-11-27
13:00:01.272265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x182a00 00:21:50.760 [2024-11-27 13:00:01.272621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.760 [2024-11-27 13:00:01.272643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:61 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.760 [2024-11-27 13:00:01.272664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.760 [2024-11-27 13:00:01.272685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.760 [2024-11-27 13:00:01.272706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.760 [2024-11-27 13:00:01.272727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.760 [2024-11-27 13:00:01.272749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.272761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.760 [2024-11-27 13:00:01.272771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.273058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.760 [2024-11-27 13:00:01.273069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.273087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.760 [2024-11-27 13:00:01.273097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.273464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.760 [2024-11-27 13:00:01.273474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:50.760 [2024-11-27 13:00:01.273492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.760 [2024-11-27 13:00:01.273501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273517] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:01.273972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.273989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x182a00 00:21:50.761 [2024-11-27 13:00:01.273998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.274015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x182a00 00:21:50.761 [2024-11-27 
13:00:01.274024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.274041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x182a00 00:21:50.761 [2024-11-27 13:00:01.274051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.274067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x182a00 00:21:50.761 [2024-11-27 13:00:01.274076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.274094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x182a00 00:21:50.761 [2024-11-27 13:00:01.274103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.274119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x182a00 00:21:50.761 [2024-11-27 13:00:01.274129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.274145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x182a00 00:21:50.761 [2024-11-27 13:00:01.274156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.274172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x182a00 00:21:50.761 [2024-11-27 13:00:01.274181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:01.274198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x182a00 00:21:50.761 [2024-11-27 13:00:01.274207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:21:50.761 17595.38 IOPS, 68.73 MiB/s
[2024-11-27T12:00:17.146Z] 16338.57 IOPS, 63.82 MiB/s
[2024-11-27T12:00:17.146Z] 15249.33 IOPS, 59.57 MiB/s
[2024-11-27T12:00:17.146Z] 15142.44 IOPS, 59.15 MiB/s
[2024-11-27T12:00:17.146Z] 15352.06 IOPS, 59.97 MiB/s
[2024-11-27T12:00:17.146Z] 15455.39 IOPS, 60.37 MiB/s
[2024-11-27T12:00:17.146Z] 15441.95 IOPS, 60.32 MiB/s
[2024-11-27T12:00:17.146Z] 15425.60 IOPS, 60.26 MiB/s
[2024-11-27T12:00:17.146Z] 15561.48 IOPS, 60.79 MiB/s
[2024-11-27T12:00:17.146Z] 15701.95 IOPS, 61.34 MiB/s
[2024-11-27T12:00:17.146Z] 15819.13 IOPS, 61.79 MiB/s
[2024-11-27T12:00:17.146Z] 15786.33 IOPS, 61.67 MiB/s
[2024-11-27T12:00:17.146Z] 15756.60 IOPS, 61.55 MiB/s
[2024-11-27T12:00:17.146Z] [2024-11-27 13:00:14.458171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:14.458205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:14.458223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x182a00 00:21:50.761 [2024-11-27 13:00:14.458234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:14.458783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:14.458796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:14.458808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:14.458818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:14.458829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:14.458838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:14.458851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x182a00 00:21:50.761 [2024-11-27 13:00:14.458860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:14.458871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:14.458880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:50.761 [2024-11-27 13:00:14.458891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.761 [2024-11-27 13:00:14.458900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.458919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.458928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.458939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57880 len:8 SGL KEYED DATA BLOCK ADDRESS
0x2000043b4000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.458948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.458960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.458968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.458980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.458989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.459030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.459050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.459070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.459110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.459173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.459235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.459255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.459275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:57912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57936 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.459420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.459440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x182a00 00:21:50.762 [2024-11-27 13:00:14.459547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:50.762 [2024-11-27 13:00:14.459558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.762 [2024-11-27 13:00:14.459566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.459587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.459611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.459632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.459652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.459673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.459695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.459715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.459735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.459756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.459775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.459796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.459816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.459835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.459855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.459875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.459895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.459915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.459937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d8000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.459957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.459977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.459989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.459998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.460009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.460018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.460029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.460038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.460049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.460058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.460069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.460078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.460089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.460098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.460112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x182a00 00:21:50.763 [2024-11-27 13:00:14.460121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:50.763 [2024-11-27 13:00:14.461640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.763 [2024-11-27 13:00:14.461656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:50.763 
[2024-11-27 13:00:14.461670 .. 13:00:14.477651] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ~170 further command/completion pairs of the same form follow: READ and WRITE, sqid:1, nsid:1, len:8, lba range 57736-59576, reads carried as SGL KEYED DATA BLOCK (key:0x182a00) and writes as SGL DATA BLOCK OFFSET, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) with cdw0:0 p:0 m:0 dnr:0. The I/O qpair (qid:1) drained with path-related errors while the ANA state of the path under test was inaccessible.
00:21:50.768 15801.65 IOPS, 61.73 MiB/s
00:21:50.768 [2024-11-27T12:00:17.153Z] 15908.26 IOPS, 62.14 MiB/s
00:21:50.768 [2024-11-27T12:00:17.153Z] Received shutdown signal, test time was about 27.689425 seconds
00:21:50.768
00:21:50.768 Latency(us)
00:21:50.768 [2024-11-27T12:00:17.153Z] Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average  min    max
00:21:50.768 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:50.768 Verification LBA range: start 0x0 length 0x4000
00:21:50.768 Nvme0n1            : 27.69       15970.79  62.39  0.00    0.00  7995.24  83.97  3019898.88
00:21:50.768 [2024-11-27T12:00:17.153Z] ===================================================================================================================
00:21:50.768 [2024-11-27T12:00:17.153Z] Total              :             15970.79  62.39  0.00    0.00  7995.24  83.97  3019898.88
00:21:50.768 13:00:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:50.768 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:21:50.768 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:50.768 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:21:50.768 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:50.768 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:21:50.768 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:21:50.768 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:21:50.768 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:21:50.768 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:50.768 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
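The nvmfcleanup trace above shows why module unload is wrapped in a retry: right after the target exits, nvme-rdma can still hold references, so the script disables errexit and retries modprobe -r up to 20 times before removing the fabrics core. A condensed sketch of that pattern (not the verbatim nvmf/common.sh body; the sleep between attempts is an assumption):

    set +e                                  # tolerate "module is in use" failures
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break   # unloads once all queue pairs are gone
        sleep 1                             # assumed back-off between attempts
    done
    modprobe -v -r nvme-fabrics             # fabrics core unloads after the transport
    set -e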
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 53407 ']'
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 53407
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 53407 ']'
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 53407
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 53407
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 53407'
00:21:51.027 killing process with pid 53407
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 53407
00:21:51.027 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 53407
00:21:51.284 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:51.284 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:21:51.284
00:21:51.284 real 0m40.544s
00:21:51.284 user 1m49.997s
00:21:51.284 sys 0m10.641s
00:21:51.284 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:51.284 13:00:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:21:51.284 ************************************
00:21:51.284 END TEST nvmf_host_multipath_status
00:21:51.284 ************************************
00:21:51.284 13:00:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:21:51.284 13:00:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:51.284 13:00:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:51.284 13:00:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:51.284 ************************************
00:21:51.284 START TEST nvmf_discovery_remove_ifc
00:21:51.284 ************************************
00:21:51.284 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:21:51.284 * Looking for test storage...
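Just before the END TEST banner above, the killprocess 53407 helper is traced in full, and the pattern it follows is worth noting: kill -0 probes that the pid is still alive, the ps comm= lookup guards against the pid having been recycled by an unrelated process, and wait reaps the child so its exit status is observed. A condensed sketch under those assumptions (the sudo branch of the real helper is simplified away):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                        # nothing to kill
        kill -0 "$pid" || return 1                       # already gone?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # guard against pid reuse
        [ "$process_name" = sudo ] && return 1           # simplified: skip sudo wrappers
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap it, observe exit status
    }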
00:21:51.284 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:21:51.284 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:51.285 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:21:51.285 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:51.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:51.544 --rc genhtml_branch_coverage=1
00:21:51.544 --rc genhtml_function_coverage=1
00:21:51.544 --rc genhtml_legend=1
00:21:51.544 --rc geninfo_all_blocks=1
00:21:51.544 --rc geninfo_unexecuted_blocks=1
00:21:51.544
00:21:51.544 '
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:51.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:51.544 --rc genhtml_branch_coverage=1
00:21:51.544 --rc genhtml_function_coverage=1
00:21:51.544 --rc genhtml_legend=1
00:21:51.544 --rc geninfo_all_blocks=1
00:21:51.544 --rc geninfo_unexecuted_blocks=1
00:21:51.544
00:21:51.544 '
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:21:51.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:51.544 --rc genhtml_branch_coverage=1
00:21:51.544 --rc genhtml_function_coverage=1
00:21:51.544 --rc genhtml_legend=1
00:21:51.544 --rc geninfo_all_blocks=1
00:21:51.544 --rc geninfo_unexecuted_blocks=1
00:21:51.544
00:21:51.544 '
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:21:51.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:51.544 --rc genhtml_branch_coverage=1
00:21:51.544 --rc genhtml_function_coverage=1
00:21:51.544 --rc genhtml_legend=1
00:21:51.544 --rc geninfo_all_blocks=1
00:21:51.544 --rc geninfo_unexecuted_blocks=1
00:21:51.544
00:21:51.544 '
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
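The lt 1.15 2 call traced above runs scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' and compared component by component, with missing components treated as zero. A condensed sketch of the same idea, assuming only the '<' operator (the traced helper dispatches on $op for the other comparisons):

    # Sketch of the version-compare logic; returns 0 when $1 < $2.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo 'lcov older than 2'   # matches the trace: returns 0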
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:51.544 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']'
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0
00:21:51.545
00:21:51.545 real 0m0.194s
00:21:51.545 user 0m0.096s
00:21:51.545 sys 0m0.113s
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:21:51.545 ************************************
00:21:51.545 END TEST nvmf_discovery_remove_ifc
00:21:51.545 ************************************
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:51.545 ************************************
00:21:51.545 START TEST nvmf_identify_kernel_target
00:21:51.545 ************************************
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:21:51.545 * Looking for test storage...
00:21:51.545 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version
00:21:51.545 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
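The "line 33: [: : integer expression expected" messages in this run come from a numeric test ('[' '' -eq 1 ']') being handed an empty string in nvmf/common.sh's build_nvmf_app_args. The usual guard is to default the value before testing; a sketch, where FEATURE_FLAG is a hypothetical stand-in for whichever variable is empty at that line:

    # Defaulting the expansion avoids "[: : integer expression expected"
    # when the variable is unset or empty.
    if [ "${FEATURE_FLAG:-0}" -eq 1 ]; then
        echo 'feature enabled'
    fi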
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:51.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:51.805 --rc genhtml_branch_coverage=1
00:21:51.805 --rc genhtml_function_coverage=1
00:21:51.805 --rc genhtml_legend=1
00:21:51.805 --rc geninfo_all_blocks=1
00:21:51.805 --rc geninfo_unexecuted_blocks=1
00:21:51.805
00:21:51.805 '
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:51.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:51.805 --rc genhtml_branch_coverage=1
00:21:51.805 --rc genhtml_function_coverage=1
00:21:51.805 --rc genhtml_legend=1
00:21:51.805 --rc geninfo_all_blocks=1
00:21:51.805 --rc geninfo_unexecuted_blocks=1
00:21:51.805
00:21:51.805 '
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:21:51.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:51.805 --rc genhtml_branch_coverage=1
00:21:51.805 --rc genhtml_function_coverage=1
00:21:51.805 --rc genhtml_legend=1
00:21:51.805 --rc geninfo_all_blocks=1
00:21:51.805 --rc geninfo_unexecuted_blocks=1
00:21:51.805
00:21:51.805 '
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:21:51.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:51.805 --rc genhtml_branch_coverage=1
00:21:51.805 --rc genhtml_function_coverage=1
00:21:51.805 --rc genhtml_legend=1
00:21:51.805 --rc geninfo_all_blocks=1
00:21:51.805 --rc geninfo_unexecuted_blocks=1
00:21:51.805
00:21:51.805 '
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:21:51.805 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:51.806 13:00:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:51.806 13:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:51.806 13:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:51.806 13:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:21:51.806 13:00:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=()
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=()
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=()
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=()
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722
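gather_supported_nvmf_pci_devs, entered above, fills per-vendor arrays (e810, x722, mlx) from a cached PCI-ID map and then narrows to the NIC family selected for the run (mlx5 here). A rough sketch of the same discovery step, assuming a plain lspci scan instead of SPDK's prebuilt pci_bus_cache:

    # Collect every PCI function whose vendor matches Mellanox (0x15b3).
    mellanox=0x15b3
    declare -a mlx=()
    while read -r addr _; do
        mlx+=("$addr")
    done < <(lspci -D -d "${mellanox#0x}:")
    echo "Found ${#mlx[@]} Mellanox function(s): ${mlx[*]}"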
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=()
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:59.923 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
Found net devices under 0000:d9:00.0: mlx_0_0
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
Found net devices under 0000:d9:00.1: mlx_0_1
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:21:59.924 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:21:59.924 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:21:59.924 altname enp217s0f0np0
00:21:59.924 altname ens818f0np0
00:21:59.924 inet 192.168.100.8/24 scope global mlx_0_0
00:21:59.924 valid_lft forever preferred_lft forever
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:21:59.924 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:21:59.925 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:21:59.925 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:21:59.925 altname enp217s0f1np1
00:21:59.925 altname ens818f1np1
00:21:59.925 inet 192.168.100.9/24 scope global mlx_0_1
00:21:59.925 valid_lft forever preferred_lft forever
00:21:59.925 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
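get_ip_address, traced twice above, is a three-stage pipeline: list the interface's IPv4 addresses, take field 4 (address/prefix), and strip the prefix length. A minimal sketch of the same helper:

    # Print the first IPv4 address assigned to an interface.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this testbed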
00:21:59.925 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:59.925 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:21:59.925 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:21:59.925 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:21:59.925 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list
00:21:59.925 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:21:59.925 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:21:59.925 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:21:59.925 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:22:00.183 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:22:00.184 192.168.100.9'
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:22:00.184 192.168.100.9'
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:22:00.184 192.168.100.9'
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
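configure_kernel_target, which runs next in the trace, builds the in-kernel NVMe-oF target entirely through nvmet's configfs tree: make the subsystem, namespace and port directories, write the attributes, then link the subsystem into the port. A condensed sketch under the same NQN, address and backing device as the trace (a sketch, not the verbatim helper; the attr_serial and attr_allow_any_host writes are assumed to be what the bare "echo" steps below correspond to):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet
    mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
    echo rdma > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # expose subsystem on the port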
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:22:00.184 13:00:26 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:22:04.372 Waiting for block devices as requested
00:22:04.372 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:22:04.372 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:22:04.372 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:22:04.372 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:22:04.372 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:22:04.372 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:22:04.372 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:22:04.372 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:22:04.372 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:22:04.630 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:22:04.630 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:22:04.630 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:22:04.889 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:22:04.889 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:22:04.889 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:22:05.149 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:22:05.149 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:22:05.408 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420
00:22:05.668
00:22:05.668 Discovery Log Number of Records 2, Generation counter 2
00:22:05.668 =====Discovery Log Entry 0======
00:22:05.668 trtype: rdma
00:22:05.668 adrfam: ipv4
00:22:05.668 subtype: current discovery subsystem
00:22:05.668 treq: not specified, sq flow control disable supported
00:22:05.668 portid: 1
00:22:05.668 trsvcid: 4420
00:22:05.668 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:22:05.668 traddr: 192.168.100.8
00:22:05.668 eflags: none
00:22:05.668 rdma_prtype: not specified
00:22:05.668 rdma_qptype: connected
00:22:05.668 rdma_cms: rdma-cm
00:22:05.668 rdma_pkey: 0x0000
00:22:05.668 =====Discovery Log Entry 1======
00:22:05.668 trtype: rdma
00:22:05.668 adrfam: ipv4
00:22:05.668 subtype: nvme subsystem
00:22:05.668 treq: not specified, sq flow control disable supported
00:22:05.668 portid: 1
00:22:05.668 trsvcid: 4420
00:22:05.668 subnqn: nqn.2016-06.io.spdk:testnqn
00:22:05.668 traddr: 192.168.100.8
00:22:05.668 eflags: none
00:22:05.668 rdma_prtype: not specified
00:22:05.668 rdma_qptype: connected
00:22:05.668 rdma_cms: rdma-cm
00:22:05.668 rdma_pkey: 0x0000
00:22:05.668 13:00:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8
00:22:05.668 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:22:05.668 =====================================================
00:22:05.668 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:05.668 =====================================================
00:22:05.668 Controller Capabilities/Features
00:22:05.668 ================================
00:22:05.668 Vendor ID: 0000
00:22:05.668 Subsystem Vendor ID: 0000
00:22:05.668 Serial Number: 6f288e7eefd98b93318e
00:22:05.668 Model Number: Linux
00:22:05.668 Firmware Version: 6.8.9-20
00:22:05.668 Recommended Arb Burst: 0
00:22:05.668 IEEE OUI Identifier: 00 00 00
00:22:05.668 Multi-path I/O
00:22:05.668 May have multiple subsystem ports: No
00:22:05.668 May have multiple controllers: No
00:22:05.668 Associated with SR-IOV VF: No
00:22:05.668 Max Data Transfer Size: Unlimited
00:22:05.668 Max Number of Namespaces: 0
00:22:05.668 Max Number of I/O Queues: 1024
00:22:05.668 NVMe Specification Version (VS): 1.3
00:22:05.668 NVMe Specification Version (Identify): 1.3
00:22:05.668 Maximum Queue Entries: 128
00:22:05.668 Contiguous Queues Required: No
00:22:05.668 Arbitration Mechanisms Supported
00:22:05.668 Weighted Round Robin: Not Supported
00:22:05.668 Vendor Specific: Not Supported
00:22:05.668 Reset Timeout: 7500 ms
00:22:05.668 Doorbell Stride: 4 bytes
00:22:05.668 NVM Subsystem Reset: Not Supported
00:22:05.668 Command Sets Supported
00:22:05.668 NVM Command Set: Supported
00:22:05.668 Boot Partition: Not Supported
00:22:05.668 Memory Page Size Minimum: 4096 bytes
00:22:05.668 Memory Page Size Maximum: 4096 bytes
00:22:05.668 Persistent Memory Region: Not Supported
00:22:05.668 Optional Asynchronous Events Supported
00:22:05.668 Namespace Attribute Notices: Not Supported
00:22:05.668 Firmware Activation Notices: Not Supported
00:22:05.668 ANA Change Notices: Not Supported
00:22:05.668 PLE Aggregate Log Change Notices: Not Supported
00:22:05.668 LBA Status Info Alert Notices: Not Supported
00:22:05.668 EGE Aggregate Log Change Notices: Not Supported
00:22:05.668 Normal NVM Subsystem Shutdown event: Not Supported
00:22:05.668 Zone Descriptor Change Notices: Not Supported
00:22:05.668 Discovery Log Change Notices: Supported
00:22:05.668 Controller Attributes
00:22:05.668 128-bit Host Identifier: Not Supported
00:22:05.668 Non-Operational Permissive Mode: Not Supported
00:22:05.668 NVM Sets: Not Supported
00:22:05.668 Read Recovery Levels: 
Not Supported 00:22:05.668 Endurance Groups: Not Supported 00:22:05.668 Predictable Latency Mode: Not Supported 00:22:05.668 Traffic Based Keep ALive: Not Supported 00:22:05.668 Namespace Granularity: Not Supported 00:22:05.668 SQ Associations: Not Supported 00:22:05.668 UUID List: Not Supported 00:22:05.668 Multi-Domain Subsystem: Not Supported 00:22:05.668 Fixed Capacity Management: Not Supported 00:22:05.668 Variable Capacity Management: Not Supported 00:22:05.668 Delete Endurance Group: Not Supported 00:22:05.668 Delete NVM Set: Not Supported 00:22:05.668 Extended LBA Formats Supported: Not Supported 00:22:05.668 Flexible Data Placement Supported: Not Supported 00:22:05.668 00:22:05.668 Controller Memory Buffer Support 00:22:05.668 ================================ 00:22:05.668 Supported: No 00:22:05.668 00:22:05.668 Persistent Memory Region Support 00:22:05.668 ================================ 00:22:05.668 Supported: No 00:22:05.668 00:22:05.668 Admin Command Set Attributes 00:22:05.668 ============================ 00:22:05.668 Security Send/Receive: Not Supported 00:22:05.668 Format NVM: Not Supported 00:22:05.668 Firmware Activate/Download: Not Supported 00:22:05.668 Namespace Management: Not Supported 00:22:05.668 Device Self-Test: Not Supported 00:22:05.668 Directives: Not Supported 00:22:05.668 NVMe-MI: Not Supported 00:22:05.668 Virtualization Management: Not Supported 00:22:05.668 Doorbell Buffer Config: Not Supported 00:22:05.668 Get LBA Status Capability: Not Supported 00:22:05.668 Command & Feature Lockdown Capability: Not Supported 00:22:05.668 Abort Command Limit: 1 00:22:05.668 Async Event Request Limit: 1 00:22:05.668 Number of Firmware Slots: N/A 00:22:05.668 Firmware Slot 1 Read-Only: N/A 00:22:05.668 Firmware Activation Without Reset: N/A 00:22:05.668 Multiple Update Detection Support: N/A 00:22:05.668 Firmware Update Granularity: No Information Provided 00:22:05.668 Per-Namespace SMART Log: No 00:22:05.668 Asymmetric Namespace Access Log Page: Not Supported 00:22:05.668 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:05.668 Command Effects Log Page: Not Supported 00:22:05.668 Get Log Page Extended Data: Supported 00:22:05.668 Telemetry Log Pages: Not Supported 00:22:05.668 Persistent Event Log Pages: Not Supported 00:22:05.668 Supported Log Pages Log Page: May Support 00:22:05.668 Commands Supported & Effects Log Page: Not Supported 00:22:05.668 Feature Identifiers & Effects Log Page:May Support 00:22:05.668 NVMe-MI Commands & Effects Log Page: May Support 00:22:05.668 Data Area 4 for Telemetry Log: Not Supported 00:22:05.668 Error Log Page Entries Supported: 1 00:22:05.668 Keep Alive: Not Supported 00:22:05.668 00:22:05.668 NVM Command Set Attributes 00:22:05.668 ========================== 00:22:05.668 Submission Queue Entry Size 00:22:05.668 Max: 1 00:22:05.668 Min: 1 00:22:05.668 Completion Queue Entry Size 00:22:05.668 Max: 1 00:22:05.668 Min: 1 00:22:05.668 Number of Namespaces: 0 00:22:05.668 Compare Command: Not Supported 00:22:05.668 Write Uncorrectable Command: Not Supported 00:22:05.668 Dataset Management Command: Not Supported 00:22:05.668 Write Zeroes Command: Not Supported 00:22:05.668 Set Features Save Field: Not Supported 00:22:05.668 Reservations: Not Supported 00:22:05.668 Timestamp: Not Supported 00:22:05.668 Copy: Not Supported 00:22:05.669 Volatile Write Cache: Not Present 00:22:05.669 Atomic Write Unit (Normal): 1 00:22:05.669 Atomic Write Unit (PFail): 1 00:22:05.669 Atomic Compare & Write Unit: 1 00:22:05.669 Fused Compare & Write: Not 
Supported 00:22:05.669 Scatter-Gather List 00:22:05.669 SGL Command Set: Supported 00:22:05.669 SGL Keyed: Supported 00:22:05.669 SGL Bit Bucket Descriptor: Not Supported 00:22:05.669 SGL Metadata Pointer: Not Supported 00:22:05.669 Oversized SGL: Not Supported 00:22:05.669 SGL Metadata Address: Not Supported 00:22:05.669 SGL Offset: Supported 00:22:05.669 Transport SGL Data Block: Not Supported 00:22:05.669 Replay Protected Memory Block: Not Supported 00:22:05.669 00:22:05.669 Firmware Slot Information 00:22:05.669 ========================= 00:22:05.669 Active slot: 0 00:22:05.669 00:22:05.669 00:22:05.669 Error Log 00:22:05.669 ========= 00:22:05.669 00:22:05.669 Active Namespaces 00:22:05.669 ================= 00:22:05.669 Discovery Log Page 00:22:05.669 ================== 00:22:05.669 Generation Counter: 2 00:22:05.669 Number of Records: 2 00:22:05.669 Record Format: 0 00:22:05.669 00:22:05.669 Discovery Log Entry 0 00:22:05.669 ---------------------- 00:22:05.669 Transport Type: 1 (RDMA) 00:22:05.669 Address Family: 1 (IPv4) 00:22:05.669 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:05.669 Entry Flags: 00:22:05.669 Duplicate Returned Information: 0 00:22:05.669 Explicit Persistent Connection Support for Discovery: 0 00:22:05.669 Transport Requirements: 00:22:05.669 Secure Channel: Not Specified 00:22:05.669 Port ID: 1 (0x0001) 00:22:05.669 Controller ID: 65535 (0xffff) 00:22:05.669 Admin Max SQ Size: 32 00:22:05.669 Transport Service Identifier: 4420 00:22:05.669 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:05.669 Transport Address: 192.168.100.8 00:22:05.669 Transport Specific Address Subtype - RDMA 00:22:05.669 RDMA QP Service Type: 1 (Reliable Connected) 00:22:05.669 RDMA Provider Type: 1 (No provider specified) 00:22:05.669 RDMA CM Service: 1 (RDMA_CM) 00:22:05.669 Discovery Log Entry 1 00:22:05.669 ---------------------- 00:22:05.669 Transport Type: 1 (RDMA) 00:22:05.669 Address Family: 1 (IPv4) 00:22:05.669 Subsystem Type: 2 (NVM Subsystem) 00:22:05.669 Entry Flags: 00:22:05.669 Duplicate Returned Information: 0 00:22:05.669 Explicit Persistent Connection Support for Discovery: 0 00:22:05.669 Transport Requirements: 00:22:05.669 Secure Channel: Not Specified 00:22:05.669 Port ID: 1 (0x0001) 00:22:05.669 Controller ID: 65535 (0xffff) 00:22:05.669 Admin Max SQ Size: 32 00:22:05.669 Transport Service Identifier: 4420 00:22:05.669 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:05.669 Transport Address: 192.168.100.8 00:22:05.669 Transport Specific Address Subtype - RDMA 00:22:05.669 RDMA QP Service Type: 1 (Reliable Connected) 00:22:05.669 RDMA Provider Type: 1 (No provider specified) 00:22:05.669 RDMA CM Service: 1 (RDMA_CM) 00:22:05.669 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:05.929 get_feature(0x01) failed 00:22:05.929 get_feature(0x02) failed 00:22:05.929 get_feature(0x04) failed 00:22:05.929 ===================================================== 00:22:05.929 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:22:05.929 ===================================================== 00:22:05.929 Controller Capabilities/Features 00:22:05.929 ================================ 00:22:05.929 Vendor ID: 0000 00:22:05.929 Subsystem Vendor ID: 0000 00:22:05.929 Serial Number: 
fbbca9579a530891f70a 00:22:05.929 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:05.929 Firmware Version: 6.8.9-20 00:22:05.929 Recommended Arb Burst: 6 00:22:05.929 IEEE OUI Identifier: 00 00 00 00:22:05.929 Multi-path I/O 00:22:05.929 May have multiple subsystem ports: Yes 00:22:05.929 May have multiple controllers: Yes 00:22:05.929 Associated with SR-IOV VF: No 00:22:05.929 Max Data Transfer Size: 1048576 00:22:05.929 Max Number of Namespaces: 1024 00:22:05.929 Max Number of I/O Queues: 128 00:22:05.929 NVMe Specification Version (VS): 1.3 00:22:05.929 NVMe Specification Version (Identify): 1.3 00:22:05.929 Maximum Queue Entries: 128 00:22:05.929 Contiguous Queues Required: No 00:22:05.929 Arbitration Mechanisms Supported 00:22:05.929 Weighted Round Robin: Not Supported 00:22:05.929 Vendor Specific: Not Supported 00:22:05.929 Reset Timeout: 7500 ms 00:22:05.929 Doorbell Stride: 4 bytes 00:22:05.929 NVM Subsystem Reset: Not Supported 00:22:05.929 Command Sets Supported 00:22:05.929 NVM Command Set: Supported 00:22:05.929 Boot Partition: Not Supported 00:22:05.929 Memory Page Size Minimum: 4096 bytes 00:22:05.929 Memory Page Size Maximum: 4096 bytes 00:22:05.929 Persistent Memory Region: Not Supported 00:22:05.929 Optional Asynchronous Events Supported 00:22:05.929 Namespace Attribute Notices: Supported 00:22:05.929 Firmware Activation Notices: Not Supported 00:22:05.929 ANA Change Notices: Supported 00:22:05.929 PLE Aggregate Log Change Notices: Not Supported 00:22:05.929 LBA Status Info Alert Notices: Not Supported 00:22:05.929 EGE Aggregate Log Change Notices: Not Supported 00:22:05.929 Normal NVM Subsystem Shutdown event: Not Supported 00:22:05.929 Zone Descriptor Change Notices: Not Supported 00:22:05.929 Discovery Log Change Notices: Not Supported 00:22:05.929 Controller Attributes 00:22:05.929 128-bit Host Identifier: Supported 00:22:05.929 Non-Operational Permissive Mode: Not Supported 00:22:05.929 NVM Sets: Not Supported 00:22:05.929 Read Recovery Levels: Not Supported 00:22:05.929 Endurance Groups: Not Supported 00:22:05.929 Predictable Latency Mode: Not Supported 00:22:05.929 Traffic Based Keep ALive: Supported 00:22:05.929 Namespace Granularity: Not Supported 00:22:05.929 SQ Associations: Not Supported 00:22:05.929 UUID List: Not Supported 00:22:05.929 Multi-Domain Subsystem: Not Supported 00:22:05.929 Fixed Capacity Management: Not Supported 00:22:05.929 Variable Capacity Management: Not Supported 00:22:05.929 Delete Endurance Group: Not Supported 00:22:05.929 Delete NVM Set: Not Supported 00:22:05.929 Extended LBA Formats Supported: Not Supported 00:22:05.929 Flexible Data Placement Supported: Not Supported 00:22:05.929 00:22:05.929 Controller Memory Buffer Support 00:22:05.929 ================================ 00:22:05.929 Supported: No 00:22:05.929 00:22:05.929 Persistent Memory Region Support 00:22:05.929 ================================ 00:22:05.929 Supported: No 00:22:05.929 00:22:05.929 Admin Command Set Attributes 00:22:05.929 ============================ 00:22:05.929 Security Send/Receive: Not Supported 00:22:05.929 Format NVM: Not Supported 00:22:05.929 Firmware Activate/Download: Not Supported 00:22:05.929 Namespace Management: Not Supported 00:22:05.929 Device Self-Test: Not Supported 00:22:05.929 Directives: Not Supported 00:22:05.929 NVMe-MI: Not Supported 00:22:05.929 Virtualization Management: Not Supported 00:22:05.929 Doorbell Buffer Config: Not Supported 00:22:05.929 Get LBA Status Capability: Not Supported 00:22:05.929 Command & Feature Lockdown 
Capability: Not Supported 00:22:05.929 Abort Command Limit: 4 00:22:05.929 Async Event Request Limit: 4 00:22:05.929 Number of Firmware Slots: N/A 00:22:05.929 Firmware Slot 1 Read-Only: N/A 00:22:05.929 Firmware Activation Without Reset: N/A 00:22:05.929 Multiple Update Detection Support: N/A 00:22:05.929 Firmware Update Granularity: No Information Provided 00:22:05.929 Per-Namespace SMART Log: Yes 00:22:05.929 Asymmetric Namespace Access Log Page: Supported 00:22:05.929 ANA Transition Time : 10 sec 00:22:05.929 00:22:05.929 Asymmetric Namespace Access Capabilities 00:22:05.929 ANA Optimized State : Supported 00:22:05.929 ANA Non-Optimized State : Supported 00:22:05.929 ANA Inaccessible State : Supported 00:22:05.929 ANA Persistent Loss State : Supported 00:22:05.929 ANA Change State : Supported 00:22:05.929 ANAGRPID is not changed : No 00:22:05.929 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:05.929 00:22:05.929 ANA Group Identifier Maximum : 128 00:22:05.929 Number of ANA Group Identifiers : 128 00:22:05.929 Max Number of Allowed Namespaces : 1024 00:22:05.929 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:05.929 Command Effects Log Page: Supported 00:22:05.929 Get Log Page Extended Data: Supported 00:22:05.929 Telemetry Log Pages: Not Supported 00:22:05.929 Persistent Event Log Pages: Not Supported 00:22:05.929 Supported Log Pages Log Page: May Support 00:22:05.929 Commands Supported & Effects Log Page: Not Supported 00:22:05.929 Feature Identifiers & Effects Log Page:May Support 00:22:05.929 NVMe-MI Commands & Effects Log Page: May Support 00:22:05.929 Data Area 4 for Telemetry Log: Not Supported 00:22:05.929 Error Log Page Entries Supported: 128 00:22:05.929 Keep Alive: Supported 00:22:05.929 Keep Alive Granularity: 1000 ms 00:22:05.929 00:22:05.929 NVM Command Set Attributes 00:22:05.929 ========================== 00:22:05.929 Submission Queue Entry Size 00:22:05.929 Max: 64 00:22:05.929 Min: 64 00:22:05.929 Completion Queue Entry Size 00:22:05.929 Max: 16 00:22:05.929 Min: 16 00:22:05.929 Number of Namespaces: 1024 00:22:05.929 Compare Command: Not Supported 00:22:05.929 Write Uncorrectable Command: Not Supported 00:22:05.929 Dataset Management Command: Supported 00:22:05.929 Write Zeroes Command: Supported 00:22:05.929 Set Features Save Field: Not Supported 00:22:05.929 Reservations: Not Supported 00:22:05.929 Timestamp: Not Supported 00:22:05.929 Copy: Not Supported 00:22:05.929 Volatile Write Cache: Present 00:22:05.929 Atomic Write Unit (Normal): 1 00:22:05.929 Atomic Write Unit (PFail): 1 00:22:05.929 Atomic Compare & Write Unit: 1 00:22:05.929 Fused Compare & Write: Not Supported 00:22:05.929 Scatter-Gather List 00:22:05.929 SGL Command Set: Supported 00:22:05.929 SGL Keyed: Supported 00:22:05.929 SGL Bit Bucket Descriptor: Not Supported 00:22:05.929 SGL Metadata Pointer: Not Supported 00:22:05.929 Oversized SGL: Not Supported 00:22:05.929 SGL Metadata Address: Not Supported 00:22:05.930 SGL Offset: Supported 00:22:05.930 Transport SGL Data Block: Not Supported 00:22:05.930 Replay Protected Memory Block: Not Supported 00:22:05.930 00:22:05.930 Firmware Slot Information 00:22:05.930 ========================= 00:22:05.930 Active slot: 0 00:22:05.930 00:22:05.930 Asymmetric Namespace Access 00:22:05.930 =========================== 00:22:05.930 Change Count : 0 00:22:05.930 Number of ANA Group Descriptors : 1 00:22:05.930 ANA Group Descriptor : 0 00:22:05.930 ANA Group ID : 1 00:22:05.930 Number of NSID Values : 1 00:22:05.930 Change Count : 0 00:22:05.930 ANA State 
: 1 00:22:05.930 Namespace Identifier : 1 00:22:05.930 00:22:05.930 Commands Supported and Effects 00:22:05.930 ============================== 00:22:05.930 Admin Commands 00:22:05.930 -------------- 00:22:05.930 Get Log Page (02h): Supported 00:22:05.930 Identify (06h): Supported 00:22:05.930 Abort (08h): Supported 00:22:05.930 Set Features (09h): Supported 00:22:05.930 Get Features (0Ah): Supported 00:22:05.930 Asynchronous Event Request (0Ch): Supported 00:22:05.930 Keep Alive (18h): Supported 00:22:05.930 I/O Commands 00:22:05.930 ------------ 00:22:05.930 Flush (00h): Supported 00:22:05.930 Write (01h): Supported LBA-Change 00:22:05.930 Read (02h): Supported 00:22:05.930 Write Zeroes (08h): Supported LBA-Change 00:22:05.930 Dataset Management (09h): Supported 00:22:05.930 00:22:05.930 Error Log 00:22:05.930 ========= 00:22:05.930 Entry: 0 00:22:05.930 Error Count: 0x3 00:22:05.930 Submission Queue Id: 0x0 00:22:05.930 Command Id: 0x5 00:22:05.930 Phase Bit: 0 00:22:05.930 Status Code: 0x2 00:22:05.930 Status Code Type: 0x0 00:22:05.930 Do Not Retry: 1 00:22:05.930 Error Location: 0x28 00:22:05.930 LBA: 0x0 00:22:05.930 Namespace: 0x0 00:22:05.930 Vendor Log Page: 0x0 00:22:05.930 ----------- 00:22:05.930 Entry: 1 00:22:05.930 Error Count: 0x2 00:22:05.930 Submission Queue Id: 0x0 00:22:05.930 Command Id: 0x5 00:22:05.930 Phase Bit: 0 00:22:05.930 Status Code: 0x2 00:22:05.930 Status Code Type: 0x0 00:22:05.930 Do Not Retry: 1 00:22:05.930 Error Location: 0x28 00:22:05.930 LBA: 0x0 00:22:05.930 Namespace: 0x0 00:22:05.930 Vendor Log Page: 0x0 00:22:05.930 ----------- 00:22:05.930 Entry: 2 00:22:05.930 Error Count: 0x1 00:22:05.930 Submission Queue Id: 0x0 00:22:05.930 Command Id: 0x0 00:22:05.930 Phase Bit: 0 00:22:05.930 Status Code: 0x2 00:22:05.930 Status Code Type: 0x0 00:22:05.930 Do Not Retry: 1 00:22:05.930 Error Location: 0x28 00:22:05.930 LBA: 0x0 00:22:05.930 Namespace: 0x0 00:22:05.930 Vendor Log Page: 0x0 00:22:05.930 00:22:05.930 Number of Queues 00:22:05.930 ================ 00:22:05.930 Number of I/O Submission Queues: 128 00:22:05.930 Number of I/O Completion Queues: 128 00:22:05.930 00:22:05.930 ZNS Specific Controller Data 00:22:05.930 ============================ 00:22:05.930 Zone Append Size Limit: 0 00:22:05.930 00:22:05.930 00:22:05.930 Active Namespaces 00:22:05.930 ================= 00:22:05.930 get_feature(0x05) failed 00:22:05.930 Namespace ID:1 00:22:05.930 Command Set Identifier: NVM (00h) 00:22:05.930 Deallocate: Supported 00:22:05.930 Deallocated/Unwritten Error: Not Supported 00:22:05.930 Deallocated Read Value: Unknown 00:22:05.930 Deallocate in Write Zeroes: Not Supported 00:22:05.930 Deallocated Guard Field: 0xFFFF 00:22:05.930 Flush: Supported 00:22:05.930 Reservation: Not Supported 00:22:05.930 Namespace Sharing Capabilities: Multiple Controllers 00:22:05.930 Size (in LBAs): 3907029168 (1863GiB) 00:22:05.930 Capacity (in LBAs): 3907029168 (1863GiB) 00:22:05.930 Utilization (in LBAs): 3907029168 (1863GiB) 00:22:05.930 UUID: e031deb0-e278-425d-b636-485c61436f1a 00:22:05.930 Thin Provisioning: Not Supported 00:22:05.930 Per-NS Atomic Units: Yes 00:22:05.930 Atomic Boundary Size (Normal): 0 00:22:05.930 Atomic Boundary Size (PFail): 0 00:22:05.930 Atomic Boundary Offset: 0 00:22:05.930 NGUID/EUI64 Never Reused: No 00:22:05.930 ANA group ID: 1 00:22:05.930 Namespace Write Protected: No 00:22:05.930 Number of LBA Formats: 1 00:22:05.930 Current LBA Format: LBA Format #00 00:22:05.930 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:05.930 00:22:05.930 
13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:05.930 rmmod nvme_rdma 00:22:05.930 rmmod nvme_fabrics 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:22:05.930 13:00:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:22:10.113 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:10.113 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:12.017 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:22:12.017 00:22:12.017 real 0m20.386s 00:22:12.017 user 0m5.388s 00:22:12.017 sys 0m12.207s 00:22:12.018 13:00:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.018 13:00:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.018 ************************************ 00:22:12.018 END TEST nvmf_identify_kernel_target 00:22:12.018 ************************************ 00:22:12.018 13:00:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:12.018 13:00:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:12.018 13:00:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.018 13:00:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.018 ************************************ 00:22:12.018 START TEST nvmf_auth_host 00:22:12.018 ************************************ 00:22:12.018 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:12.018 * Looking for test storage... 
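The nvmf_identify_kernel_target run that just ended exercises the kernel nvmet target end to end: configure_kernel_target (nvmf/common.sh@660-705, traced at the start of the test) builds the target through configfs, the two spdk_nvme_identify dumps interrogate it over RDMA, and clean_kernel_target (nvmf/common.sh@712-726, traced just above) tears it down. Condensed into a standalone sketch using this run's device, NQN, and address. Note that bash xtrace hides redirections, so the configfs attribute names on the right-hand side of the echoes are the standard nvmet ones, inferred rather than read from the trace; the "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" field in the identify output corroborates the attr_model guess.

  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet

  modprobe nvmet
  mkdir "$nvmet/subsystems/$nqn"
  mkdir "$nvmet/subsystems/$nqn/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo "SPDK-$nqn"   > "$nvmet/subsystems/$nqn/attr_model"               # inferred target
  echo 1             > "$nvmet/subsystems/$nqn/attr_allow_any_host"      # inferred target
  echo /dev/nvme0n1  > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
  echo 1             > "$nvmet/subsystems/$nqn/namespaces/1/enable"
  echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
  echo rdma          > "$nvmet/ports/1/addr_trtype"
  echo 4420          > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4          > "$nvmet/ports/1/addr_adrfam"
  ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"

Teardown unwinds the tree in strict reverse order; the port-to-subsystem symlink has to go before the directories, and the directories before the modules, or the rmdir and modprobe -r calls would fail:

  echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # assumed redirect target
  rm -f  "$nvmet/ports/1/subsystems/$nqn"
  rmdir  "$nvmet/subsystems/$nqn/namespaces/1"
  rmdir  "$nvmet/ports/1"
  rmdir  "$nvmet/subsystems/$nqn"
  modprobe -r nvmet_rdma nvmet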
00:22:12.018 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:12.018 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:12.018 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:12.018 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:12.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.277 --rc genhtml_branch_coverage=1 00:22:12.277 --rc genhtml_function_coverage=1 00:22:12.277 --rc genhtml_legend=1 00:22:12.277 --rc geninfo_all_blocks=1 00:22:12.277 --rc geninfo_unexecuted_blocks=1 00:22:12.277 00:22:12.277 ' 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:12.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.277 --rc genhtml_branch_coverage=1 00:22:12.277 --rc genhtml_function_coverage=1 00:22:12.277 --rc genhtml_legend=1 00:22:12.277 --rc geninfo_all_blocks=1 00:22:12.277 --rc geninfo_unexecuted_blocks=1 00:22:12.277 00:22:12.277 ' 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:12.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.277 --rc genhtml_branch_coverage=1 00:22:12.277 --rc genhtml_function_coverage=1 00:22:12.277 --rc genhtml_legend=1 00:22:12.277 --rc geninfo_all_blocks=1 00:22:12.277 --rc geninfo_unexecuted_blocks=1 00:22:12.277 00:22:12.277 ' 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:12.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.277 --rc genhtml_branch_coverage=1 00:22:12.277 --rc genhtml_function_coverage=1 00:22:12.277 --rc genhtml_legend=1 00:22:12.277 --rc geninfo_all_blocks=1 00:22:12.277 --rc geninfo_unexecuted_blocks=1 00:22:12.277 00:22:12.277 ' 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.277 13:00:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:12.277 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:12.277 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:12.278 13:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:20.396 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:20.396 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:20.396 13:00:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:20.396 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:20.396 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:22:20.396 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:20.397 13:00:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 
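allocate_nic_ips walks the RDMA-capable interfaces (mlx_0_0 and mlx_0_1, matched from the rxe_cfg listing above) and resolves each one's IPv4 address with the pipeline traced here: ip -o -4 addr show prints one flat line per address, awk takes the fourth field (the CIDR address), and cut strips the prefix length. The same commands as nvmf/common.sh@116-117, written out as a standalone helper:

  get_ip_address() {
      local interface=$1
      # output looks like: "6: mlx_0_0    inet 192.168.100.8/24 brd 192.168.100.255 scope global mlx_0_0 ..."
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # prints 192.168.100.8 on this host, per the entries that follow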
00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:20.397 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:20.397 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:20.397 altname enp217s0f0np0 00:22:20.397 altname ens818f0np0 00:22:20.397 inet 192.168.100.8/24 scope global mlx_0_0 00:22:20.397 valid_lft forever preferred_lft forever 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:20.397 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:20.397 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:20.397 altname enp217s0f1np1 00:22:20.397 altname ens818f1np1 00:22:20.397 inet 192.168.100.9/24 scope global mlx_0_1 00:22:20.397 valid_lft forever preferred_lft forever 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo 
mlx_0_0 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:20.397 192.168.100.9' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:20.397 192.168.100.9' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:20.397 192.168.100.9' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 
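With both NICs resolved, the harness flattens the addresses into one newline-separated string and peels off the first and second target IPs with head and tail, exactly as traced at nvmf/common.sh@484-486:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9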
00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=71459 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 71459 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 71459 ']' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.397 13:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6fed508375185f841cc231b5979a7299 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-null.XXX 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2Mw 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6fed508375185f841cc231b5979a7299 0 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6fed508375185f841cc231b5979a7299 0 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6fed508375185f841cc231b5979a7299 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:22:21.334 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2Mw 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2Mw 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.2Mw 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=708ad8895f3235c5f5160b8d06ce4bf64ac560c290efffb1e4c5775de6bdf4ae 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.35p 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 708ad8895f3235c5f5160b8d06ce4bf64ac560c290efffb1e4c5775de6bdf4ae 3 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 708ad8895f3235c5f5160b8d06ce4bf64ac560c290efffb1e4c5775de6bdf4ae 3 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=708ad8895f3235c5f5160b8d06ce4bf64ac560c290efffb1e4c5775de6bdf4ae 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.35p 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.35p 00:22:21.595 13:00:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.35p 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=71b263d9524748d279f398017c09d5dc4c44e2e9ca05d780 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.AyJ 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 71b263d9524748d279f398017c09d5dc4c44e2e9ca05d780 0 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 71b263d9524748d279f398017c09d5dc4c44e2e9ca05d780 0 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=71b263d9524748d279f398017c09d5dc4c44e2e9ca05d780 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.AyJ 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.AyJ 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.AyJ 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=34ccc4c287c30936ab11eacf8bba01d80709b4683af7a731 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OdH 00:22:21.595 
13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 34ccc4c287c30936ab11eacf8bba01d80709b4683af7a731 2 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 34ccc4c287c30936ab11eacf8bba01d80709b4683af7a731 2 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=34ccc4c287c30936ab11eacf8bba01d80709b4683af7a731 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OdH 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OdH 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.OdH 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0524d51f2af4a091f18e3b011660f0e7 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Xjo 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0524d51f2af4a091f18e3b011660f0e7 1 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0524d51f2af4a091f18e3b011660f0e7 1 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0524d51f2af4a091f18e3b011660f0e7 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:22:21.595 13:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Xjo 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Xjo 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Xjo 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:21.855 13:00:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ae1089e310d10f54aaee87f5fc2df4b9 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.eVE 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ae1089e310d10f54aaee87f5fc2df4b9 1 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ae1089e310d10f54aaee87f5fc2df4b9 1 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ae1089e310d10f54aaee87f5fc2df4b9 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.eVE 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.eVE 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.eVE 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0167a1d0642d1769d146a4991825ead654e0ba34ca1781f4 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.NHP 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0167a1d0642d1769d146a4991825ead654e0ba34ca1781f4 2 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
0167a1d0642d1769d146a4991825ead654e0ba34ca1781f4 2 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0167a1d0642d1769d146a4991825ead654e0ba34ca1781f4 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.NHP 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.NHP 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.NHP 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fef94df37dfd87567324b4766a580b4f 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.OFB 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fef94df37dfd87567324b4766a580b4f 0 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fef94df37dfd87567324b4766a580b4f 0 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fef94df37dfd87567324b4766a580b4f 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.OFB 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.OFB 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OFB 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:21.855 
13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:21.855 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=966af5d75756dae26c8a411a17f36efb90097a2619699fedf42bd6982f6123b8 00:22:21.856 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:22:21.856 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hE3 00:22:21.856 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 966af5d75756dae26c8a411a17f36efb90097a2619699fedf42bd6982f6123b8 3 00:22:21.856 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 966af5d75756dae26c8a411a17f36efb90097a2619699fedf42bd6982f6123b8 3 00:22:21.856 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:21.856 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:21.856 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=966af5d75756dae26c8a411a17f36efb90097a2619699fedf42bd6982f6123b8 00:22:21.856 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:22:21.856 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hE3 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hE3 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.hE3 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 71459 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 71459 ']' 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
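
The block above mints the nine DHCHAP secrets (keys 0-4 plus controller keys 0-3; ckeys[4] is deliberately left empty). Each call draws len/2 random bytes as hex via xxd, wraps them into a DHHC-1:<digest-id>:<payload>: string, and stores the result in a chmod-0600 temp file. A condensed sketch follows; the payload encoding (base64 over the ASCII secret plus a little-endian CRC32, per the NVMe DH-HMAC-CHAP secret representation) is inferred from the key strings printed later in this log, since the inline python that produces it is not expanded in the trace:

# Sketch of gen_dhchap_key/format_dhchap_key as traced above.
format_dhchap_key() {   # $1 = hex secret, $2 = digest id (0=null 1=sha256 2=sha384 3=sha512)
    python3 -c '
import base64, binascii, struct, sys
secret = sys.argv[1].encode()
# Assumed payload: base64(secret || crc32(secret), CRC little-endian), which is
# consistent with the DHHC-1 strings that appear further down in this log.
crc = struct.pack("<I", binascii.crc32(secret))
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
' "$1" "$2"
}

gen_dhchap_key() {      # e.g. gen_dhchap_key null 32, gen_dhchap_key sha512 64
    local digest=$1 len=$2 key file
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex chars of entropy
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"   # redirection into $file assumed
    chmod 0600 "$file"
    echo "$file"
}
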
00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2Mw 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.35p ]] 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.35p 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.AyJ 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.114 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.OdH ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OdH 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Xjo 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.eVE ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eVE 00:22:22.373 13:00:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.NHP 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OFB ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OFB 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.hE3 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:22:22.373 13:00:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:22.373 13:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:22:26.559 Waiting for block devices as requested 00:22:26.559 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:26.559 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:26.559 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:26.559 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:26.559 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:26.559 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:26.559 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:26.559 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:26.816 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:26.816 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:26.816 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:27.074 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:27.074 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:27.074 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:27.331 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:27.331 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:27.331 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:28.266 No valid GPT data, bailing 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:22:28.266 00:22:28.266 Discovery Log Number of Records 2, Generation counter 2 00:22:28.266 =====Discovery Log Entry 0====== 00:22:28.266 trtype: rdma 00:22:28.266 adrfam: ipv4 00:22:28.266 subtype: current discovery subsystem 00:22:28.266 treq: not specified, sq flow control disable supported 00:22:28.266 portid: 1 00:22:28.266 trsvcid: 4420 00:22:28.266 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:28.266 traddr: 192.168.100.8 00:22:28.266 eflags: none 00:22:28.266 rdma_prtype: not specified 00:22:28.266 rdma_qptype: connected 00:22:28.266 rdma_cms: rdma-cm 00:22:28.266 rdma_pkey: 0x0000 00:22:28.266 =====Discovery Log Entry 1====== 00:22:28.266 trtype: rdma 00:22:28.266 adrfam: ipv4 00:22:28.266 subtype: nvme subsystem 00:22:28.266 treq: not specified, sq flow control disable supported 00:22:28.266 portid: 1 00:22:28.266 trsvcid: 4420 00:22:28.266 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:28.266 traddr: 192.168.100.8 00:22:28.266 eflags: none 00:22:28.266 rdma_prtype: not specified 00:22:28.266 rdma_qptype: connected 00:22:28.266 rdma_cms: rdma-cm 00:22:28.266 rdma_pkey: 0x0000 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.266 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:28.267 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.267 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:28.267 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.267 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.267 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.267 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.525 nvme0n1 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.525 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.784 13:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.784 nvme0n1 00:22:28.784 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.784 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.784 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.784 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.784 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
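
Each connect_authenticate round from here on follows the same pattern: push the secret to the kernel target, constrain the SPDK host to a single digest/dhgroup pair, attach, confirm the controller came up, detach. A condensed sketch of the keyid=1 round whose trace resumes below; the rpc invocations mirror the rpc_cmd calls verbatim from the log, while the configfs file names receiving the echoed values are an assumption (the xtrace shows only the values, not their destinations):

# Target side (nvmet_auth_set_key sha256 ffdhe2048 1): assumed dhchap_* attribute
# names under the allowed-host entry; the echoed values are quoted from the trace.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:NzFiMjYz...frm2zw==:' > "$host/dhchap_key"        # key1, full string in the trace
echo 'DHHC-1:02:MzRjY2M0...zu1KRg==:' > "$host/dhchap_ctrl_key"   # ckey1, full string in the trace

# Host side: the files were registered earlier with
#   rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.AyJ
#   rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OdH
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify and tear down, as the trace does between rounds.
[[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc.py bdev_nvme_detach_controller nvme0
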
00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.043 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.305 nvme0n1 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.305 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.306 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.623 nvme0n1 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:29.623 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.624 13:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.934 nvme0n1 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.934 13:00:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.934 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.268 nvme0n1 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 
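The records above show host/auth.sh iterating every DH-HMAC-CHAP dhgroup against every keyid: for each pair it first programs the kernel nvmet target via nvmet_auth_set_key (auth.sh@42-51), then exercises the host side via connect_authenticate (auth.sh@104). A minimal bash sketch of that driver loop and of nvmet_auth_set_key, reconstructed from the traced commands, follows. The keys/ckeys/dhgroups arrays are populated earlier in the script and are not part of this excerpt, and the configfs paths are assumptions (xtrace does not show the echoes' redirection targets); connect_authenticate is sketched after the ffdhe3072 records below.

    # Sketch only: loop structure and echo values are taken from the trace;
    # the nvmet configfs attribute paths below are assumed, not traced.
    hostnqn=nqn.2024-02.io.spdk:host0                  # host NQN used throughout this run
    host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed target for the echoes

    nvmet_auth_set_key() {                             # auth.sh@42-51
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        echo "hmac($digest)" > "$host_cfs/dhchap_hash"     # auth.sh@48: e.g. 'hmac(sha256)'
        echo "$dhgroup"      > "$host_cfs/dhchap_dhgroup"  # auth.sh@49: e.g. ffdhe3072
        echo "$key"          > "$host_cfs/dhchap_key"      # auth.sh@50: DHHC-1:... secret
        # keyid=4 has no controller key in this run, so the write is skipped (auth.sh@51)
        [[ -z $ckey ]] || echo "$ckey" > "$host_cfs/dhchap_ctrl_key"
    }

    # Outer iteration as traced at auth.sh@101-103: every dhgroup against every keyid.
    for dhgroup in "${dhgroups[@]}"; do                # ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do                 # keyids 0..4 in this run
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done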
00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:30.268 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.269 13:00:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.269 nvme0n1 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.269 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:30.527 13:00:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.527 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.787 nvme0n1 00:22:30.787 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.787 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:30.787 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.787 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.787 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.787 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.787 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.787 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.787 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.787 13:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
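connect_authenticate (auth.sh@55-65) is the host-side half of each iteration just traced: it narrows the host to the digest/dhgroup pair under test, attaches a controller with the matching DH-HMAC-CHAP keys, and treats the attach as the pass/fail signal. The sketch below is reconstructed from the traced commands only; rpc_cmd and get_main_ns_ip are the SPDK test helpers whose expansions appear in this log, and the function body here is a reconstruction of those steps, not the script's verbatim source.

    connect_authenticate() {                           # auth.sh@55-65, reconstructed
        local digest=$1 dhgroup=$2 keyid=$3
        # Restrict the host to the digest/dhgroup pair under test (auth.sh@60).
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        local ip
        ip=$(get_main_ns_ip)   # resolves NVMF_FIRST_TARGET_IP for rdma; 192.168.100.8 here
        # Attach with the per-keyid secret; the ctrlr key is passed only when a
        # bidirectional key exists for this keyid (auth.sh@58 and @61).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        # Authentication succeeded iff the controller materialized; then detach (auth.sh@64-65).
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }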
00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.787 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.046 nvme0n1 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.046 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.304 nvme0n1 00:22:31.304 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.304 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.304 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:31.304 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.304 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.304 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.304 13:00:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.305 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.564 nvme0n1 00:22:31.564 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.564 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.564 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.564 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.564 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:31.564 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.564 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.564 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.564 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.564 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.822 
13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.822 13:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.081 nvme0n1 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.081 
13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.081 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.339 nvme0n1 00:22:32.340 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.340 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.340 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.340 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.340 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.340 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.340 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.340 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.340 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.340 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:32.598 
13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.598 13:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.857 nvme0n1 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.857 13:00:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.857 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.115 nvme0n1 00:22:33.115 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.115 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.115 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.115 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.115 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.115 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.115 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.116 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.116 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.116 
13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:33.373 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.374 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.632 nvme0n1 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.632 13:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.199 nvme0n1 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
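[editor's note] The entries above make up one complete authentication round: connect_authenticate() pins the initiator to a single digest and DH group, resolves the RDMA target IP, attaches a controller with the selected key pair, checks that it enumerates as nvme0, and detaches again. Stripped of the xtrace noise, the round reduces to the sketch below, using only the RPCs visible in this log; it assumes scripts/rpc.py from an SPDK checkout, an RDMA port at 192.168.100.8:4420, and that the names key0/ckey0 were registered with the bdev layer by earlier harness setup not shown in this section.

    # One DH-HMAC-CHAP round, mirroring host/auth.sh connect_authenticate()
    rpc=./scripts/rpc.py
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] && echo "authenticated as $name"   # auth succeeded
    $rpc bdev_nvme_detach_controller nvme0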
00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.199 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.765 nvme0n1 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:34.765 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
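[editor's note] The DHHC-1 strings echoed below follow the NVMe DH-HMAC-CHAP secret representation (TP 8006): "DHHC-1:<t>:<base64>:", where <t> records the hash used to transform the cleartext secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret followed by a 4-byte CRC-32. That is why the decoded payloads in this log come out to 36, 52, or 68 bytes (32/48/64-byte secrets plus the CRC). A quick sanity check on one of the keys from this run, pure shell, no SPDK needed:

    # Decode the payload of a DHHC-1 secret and report its length.
    # 36/52/68 bytes decoded => 32/48/64-byte secret + 4-byte CRC-32.
    key='DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur:'
    printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c    # -> 36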
00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.766 13:01:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.766 13:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.024 nvme0n1 00:22:35.024 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.024 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.024 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.024 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.024 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.024 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.024 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.024 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.024 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.024 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:35.283 13:01:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.283 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.546 nvme0n1 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.546 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:35.805 13:01:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.805 13:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.064 nvme0n1 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 
00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.064 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.322 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.322 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.322 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:36.322 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:36.322 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:36.323 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.323 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.323 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:36.323 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:36.323 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:36.323 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:36.323 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:36.323 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.323 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.323 13:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.890 nvme0n1 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
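[editor's note] On the target side, nvmet_auth_set_key() (the @42-@51 entries) pushes the same digest, DH group, and key pair to the kernel nvmet host entry, so both ends agree on parameters before the attach. The echo lines in the log are writes into configfs; a hedged sketch of where they land, assuming the stock Linux nvmet configfs layout (the attribute names are the upstream kernel ones, not taken from this log):

    # Kernel nvmet side of nvmet_auth_set_key(), run as root.
    # Assumes the host NQN directory already exists under configfs.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest for DH-HMAC-CHAP
    echo ffdhe8192      > "$host/dhchap_dhgroup"   # FFDHE group
    echo 'DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na:' \
        > "$host/dhchap_key"                       # host secret (keyid 0 above)
    # Controller key enables bidirectional auth; omitted for keyid 4 above.
    echo 'DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=:' \
        > "$host/dhchap_ctrl_key"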
\n\v\m\e\0 ]] 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.890 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.456 nvme0n1 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:37.456 13:01:03 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:37.456 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:37.457 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.457 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.457 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:37.457 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:37.457 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:37.457 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:37.457 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:37.457 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.457 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.457 13:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.390 nvme0n1 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.390 
13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.390 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:38.391 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:38.391 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:38.391 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.391 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.391 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:38.391 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:38.391 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:38.391 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:38.391 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:38.391 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:38.391 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.391 13:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.958 nvme0n1 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:38.958 13:01:05 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.958 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.524 nvme0n1 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:39.524 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:39.525 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:39.525 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:39.525 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:39.525 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
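[annotation] Each pass in this trace drives one (digest, dhgroup, keyid) combination through the same four RPCs. Reduced to the commands the xtrace output actually shows, the sha384/ffdhe2048/key0 pass beginning here looks roughly like the sketch below. This is a minimal reconstruction, assuming SPDK's scripts/rpc.py front end (the trace's rpc_cmd wraps it) and that key0/ckey0 name DH-HMAC-CHAP keys registered earlier in the run; it is not the literal test script.

    # Host side: restrict negotiation to the digest/DH group the target key was just set to.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # Connect with DH-HMAC-CHAP over RDMA; --dhchap-key authenticates the host,
    # and --dhchap-ctrlr-key (present because ckey0 is non-empty) authenticates
    # the controller back, i.e. bidirectional authentication.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Confirm the controller came up, then detach before the next combination.
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
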
00:22:39.525 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:39.525 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:39.525 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:39.525 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.525 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:39.525 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.525 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.783 13:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.783 nvme0n1 00:22:39.783 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.783 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.783 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.783 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.783 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.783 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.783 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.783 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.783 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:39.783 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.041 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.041 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.041 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:40.041 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.041 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.042 nvme0n1 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.042 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.300 13:01:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:40.300 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:40.301 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:40.301 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.301 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.301 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.559 nvme0n1 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.559 13:01:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.559 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.560 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:40.560 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:40.560 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:40.560 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:40.560 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:40.560 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:40.560 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.560 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.818 nvme0n1 00:22:40.818 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.818 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.818 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.818 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.818 13:01:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.818 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:22:41.077 nvme0n1 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:41.077 
13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.077 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.336 nvme0n1 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.336 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.594 nvme0n1 00:22:41.594 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.594 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.594 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.594 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.594 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.594 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.594 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.594 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.594 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.594 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:41.852 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:41.853 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:41.853 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:41.853 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.853 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:41.853 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:41.853 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:41.853 13:01:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.853 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.110 nvme0n1 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.110 13:01:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:42.110 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:42.111 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:42.111 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.111 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.368 nvme0n1 00:22:42.368 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.368 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.368 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.368 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.368 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.368 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.368 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.368 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.368 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.368 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.368 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:42.369 13:01:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.369 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.627 nvme0n1 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:42.627 13:01:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:42.627 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.628 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.628 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.628 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.628 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:42.628 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:42.628 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:42.628 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:42.628 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:42.628 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.628 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.628 13:01:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.886 nvme0n1 00:22:42.886 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.886 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.886 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.886 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.886 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.886 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.143 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.143 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.143 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.143 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.143 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.143 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.144 13:01:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:43.144 13:01:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.144 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.402 nvme0n1 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:43.402 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.403 13:01:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.661 nvme0n1 00:22:43.661 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.661 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.661 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.661 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.661 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.661 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.919 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.920 13:01:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.920 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.178 nvme0n1 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:44.178 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.179 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.437 nvme0n1 00:22:44.437 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.437 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.437 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.437 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.437 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.437 13:01:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.696 13:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.955 nvme0n1 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:44.955 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.213 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.472 nvme0n1 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:45.472 13:01:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:45.472 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:45.730 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.730 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.730 13:01:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.988 nvme0n1 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
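On the host side, connect_authenticate <digest> <dhgroup> <keyid> then drives the bdev_nvme RPCs visible in the trace: restrict negotiation to the digest/dhgroup pair under test, attach with the matching key names, treat the attempt as authenticated only if the controller actually shows up, and detach again. A condensed sketch of that per-iteration flow follows, with rpc_cmd standing in for the test harness's RPC wrapper (assumed to forward to the running SPDK app) and the addresses, NQNs, and flags copied verbatim from this run:

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# Expands to "--dhchap-ctrlr-key ckeyN" only when ckeys[keyid] is set,
	# i.e. the same ${ckeys[keyid]:+...} trick seen at auth.sh@58.
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# Limit negotiable digests and DH groups to the combination under test.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

	# get_main_ns_ip picks NVMF_FIRST_TARGET_IP for rdma (192.168.100.8 in this run)
	# and NVMF_INITIATOR_IP for tcp, per the ip_candidates table traced above.
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# DH-HMAC-CHAP succeeded iff the controller came up under the expected name.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}
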
00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:45.988 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.989 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.555 nvme0n1 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.555 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.556 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:46.556 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:46.556 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:46.556 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:46.556 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:46.556 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:46.556 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.556 13:01:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.121 nvme0n1 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.121 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.122 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:47.122 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.122 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.122 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:47.122 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:47.122 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.122 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.122 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.688 nvme0n1 00:22:47.688 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.688 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.688 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.688 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.688 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.688 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.688 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.688 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.688 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.688 13:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.688 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:48.254 nvme0n1 00:22:48.254 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.254 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.254 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.254 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.254 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.512 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.512 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.512 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.512 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.512 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.512 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.512 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.512 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:48.512 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.512 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:48.512 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:48.512 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.513 13:01:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.080 nvme0n1 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:49.080 
13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.080 13:01:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.013 nvme0n1 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:50.013 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.014 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.579 nvme0n1 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:50.579 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:50.580 13:01:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.580 13:01:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.852 nvme0n1 00:22:50.852 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.852 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:50.852 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.852 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.852 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.852 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.852 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.852 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.852 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.852 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.852 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.852 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:50.853 13:01:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.853 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.111 nvme0n1 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.111 13:01:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
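The trace above repeats one host-side pattern for every digest/dhgroup/keyid combination: restrict the initiator to a single DH-HMAC-CHAP digest and DH group, then attach with the matching key (and controller key, when a bidirectional ckey exists). A minimal standalone sketch of the sequence just traced for sha512/ffdhe2048/keyid=2, assuming scripts/rpc.py as the transport that the rpc_cmd helper wraps, that keyring_file_add_key is available in this SPDK build, and hypothetical key files under /tmp:

    # Load the DH-HMAC-CHAP secrets into the keyring under the names
    # that --dhchap-key/--dhchap-ctrlr-key will reference.
    scripts/rpc.py keyring_file_add_key key2 /tmp/key2.txt
    scripts/rpc.py keyring_file_add_key ckey2 /tmp/ckey2.txt

    # Allow exactly one digest and one DH group for the negotiation,
    # mirroring the bdev_nvme_set_options call in the trace.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Attach over RDMA with bidirectional authentication, using the same
    # flags that appear verbatim in the traced bdev_nvme_attach_controller.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2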
00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.111 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.370 nvme0n1 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:51.370 13:01:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.370 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.628 nvme0n1 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:51.628 13:01:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:51.628 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:51.629 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:51.629 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:51.629 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.629 13:01:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.887 nvme0n1 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
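As the `for dhgroup in "${dhgroups[@]}"` / `for keyid in "${!keys[@]}"` markers show, host/auth.sh is driving a full cross-product of digests, DH groups, and key IDs, connecting and detaching once per combination. A condensed reconstruction of that driver loop; this is a sketch under the assumption that nvmet_auth_set_key and connect_authenticate behave as in the trace, and the array contents shown are illustrative:

    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Program the target's expected key for this combination.
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # Reconfigure the host and (re)connect with the same key;
                # connect_authenticate wraps the rpc_cmd calls seen above,
                # then verifies via bdev_nvme_get_controllers | jq and
                # detaches nvme0 before the next iteration.
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line in the trace is the bash idiom for an optional argument: the array expands to the extra flag only when a controller key is defined for that keyid. Keyid 4 has an empty ckey, which is why its attach commands carry --dhchap-key key4 alone.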
00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.887 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.145 nvme0n1 00:22:52.145 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.145 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.145 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.145 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.145 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.145 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:52.403 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:52.404 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:52.404 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.404 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.404 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.662 nvme0n1 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.662 13:01:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.662 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.663 13:01:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.663 13:01:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.921 nvme0n1 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 
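
The trace above repeats one provisioning step per key slot. A minimal sketch of the nvmet_auth_set_key helper it exercises, reconstructed from the auth.sh@42-51 lines; the configfs destination paths are an assumption based on the stock kernel nvmet layout, since the trace only shows the echoed values, not where they land:

    # Sketch, not the verbatim helper: digest/dhgroup/keyid come from the
    # positional args visible in the trace (e.g. "sha512 ffdhe3072 3").
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed destination: per-host DH-HMAC-CHAP attrs in nvmet configfs.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"     # auth.sh@48
        echo "$dhgroup" > "$host/dhchap_dhgroup"       # auth.sh@49
        echo "$key" > "$host/dhchap_key"               # auth.sh@50
        # keyid 4 carries no controller key, hence the [[ -z '' ]] branch.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # auth.sh@51
    }
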
00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.921 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:52.922 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.922 13:01:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.179 nvme0n1 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:53.179 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.180 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.438 nvme0n1 00:22:53.438 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.438 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.438 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.438 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.438 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.438 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.438 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.438 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.438 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.438 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:53.697 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:53.698 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:53.698 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:22:53.698 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:53.698 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.698 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.698 13:01:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.956 nvme0n1 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.956 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.214 nvme0n1 00:22:54.214 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.214 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.214 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.214 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.214 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.214 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.214 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.214 13:01:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.214 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.214 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:54.473 13:01:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.473 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.731 nvme0n1 00:22:54.731 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.731 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.731 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.731 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.731 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.731 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.731 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.731 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.731 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.731 13:01:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==: 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]] 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:54.731 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:54.732 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:54.732 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.732 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.990 nvme0n1 00:22:54.990 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.990 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.990 
13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.990 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.990 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.990 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=: 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:55.248 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.249 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.507 nvme0n1 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:55.507 13:01:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na: 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]] 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.507 13:01:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.073 nvme0n1 00:22:56.073 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.073 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.073 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:56.074 13:01:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.074 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.641 nvme0n1 00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
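
Each connect_authenticate pass in this trace boils down to four RPCs against the SPDK initiator app; rpc_cmd is the test harness wrapper that forwards to scripts/rpc.py, and the flags below are copied verbatim from the auth.sh@60-65 lines above (key1/ffdhe6144 shown as the example):

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Authentication passed iff the controller materialized under the
    # expected name; tear it down before the next (digest, dhgroup, keyid).
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
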
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur:
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9:
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur:
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]]
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9:
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.641 13:01:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:56.899 nvme0n1
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==:
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN:
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==:
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]]
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN:
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:22:57.157 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.158 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:57.417 nvme0n1
00:22:57.417 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.417 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:57.417 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:57.417 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.417 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:57.417 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.417 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:57.417 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:57.417 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.417 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=:
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=:
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.676 13:01:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:57.934 nvme0n1
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
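A note on the empty ckey in the keyid=4 round above: host/auth.sh@58 builds the controller-key arguments with a ${parameter:+word} expansion, so --dhchap-ctrlr-key is only passed when a bidirectional key exists for that index. Sketch of the mechanism, exactly as traced:

    # ckeys[4] is empty, so the array expands to nothing and the attach is
    # issued with --dhchap-key key4 alone (unidirectional authentication).
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"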
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na:
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=:
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZDUwODM3NTE4NWY4NDFjYzIzMWI1OTc5YTcyOTmnF1na:
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=: ]]
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA4YWQ4ODk1ZjMyMzVjNWY1MTYwYjhkMDZjZTRiZjY0YWM1NjBjMjkwZWZmZmIxZTRjNTc3NWRlNmJkZjRhZXpvtBQ=:
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:22:57.934 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:22:57.935 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:57.935 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:57.935 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.935 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:58.192 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:58.757 nvme0n1
00:22:58.758 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:58.758 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:58.758 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:58.758 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:58.758 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:58.758 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:58.758 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:58.758 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:58.758 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:58.758 13:01:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==:
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==:
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==:
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]]
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==:
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:58.758 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:59.325 nvme0n1
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur:
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9:
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur:
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]]
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9:
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.325 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.583 13:01:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.150 nvme0n1
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==:
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN:
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE2N2ExZDA2NDJkMTc2OWQxNDZhNDk5MTgyNWVhZDY1NGUwYmEzNGNhMTc4MWY0U8S0cA==:
00:23:00.150 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN: ]]
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVmOTRkZjM3ZGZkODc1NjczMjRiNDc2NmE1ODBiNGbGA5IN:
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.151 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.717 nvme0n1
00:23:00.717 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.717 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:00.717 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:00.717 13:01:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
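The get_main_ns_ip calls traced before every attach resolve which address the initiator should dial. A sketch of the logic as it appears in the nvmf/common.sh entries above, with the error guards elided; it assumes the transport selector is the suite's TEST_TRANSPORT variable, which the trace shows already expanded to rdma:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # ip receives the *name* of a variable (NVMF_FIRST_TARGET_IP for
        # rdma); ${!ip} then expands indirectly to its value, which this
        # run set to 192.168.100.8.
        ip=${ip_candidates[$TEST_TRANSPORT]}
        echo "${!ip}"
    }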
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=:
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTY2YWY1ZDc1NzU2ZGFlMjZjOGE0MTFhMTdmMzZlZmI5MDA5N2EyNjE5Njk5ZmVkZjQyYmQ2OTgyZjYxMjNiOGFhLXQ=:
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:00.717 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.653 nvme0n1
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
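From here the script turns to the failure cases (host/auth.sh@110 onward): the target is loaded with key1/ckey1 for sha256/ffdhe2048, and the attach attempts that follow are wrapped in NOT, the suite's expected-failure helper. A simplified reconstruction from the autotest_common.sh entries in the trace (the real helper also validates its argument via valid_exec_arg and handles signal exits, per the @640-@663 lines):

    # NOT succeeds only when the wrapped command fails; (( !es == 0 )) is
    # true exactly when the captured exit status is non-zero.
    NOT() {
        local es=0
        "$@" || es=$?
        (( !es == 0 ))
    }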
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==:
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==:
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==:
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==:
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.653 request:
00:23:01.653 {
00:23:01.653 "name": "nvme0",
00:23:01.653 "trtype": "rdma",
00:23:01.653 "traddr": "192.168.100.8",
00:23:01.653 "adrfam": "ipv4",
00:23:01.653 "trsvcid": "4420",
00:23:01.653 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:23:01.653 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:23:01.653 "prchk_reftag": false,
00:23:01.653 "prchk_guard": false,
00:23:01.653 "hdgst": false,
00:23:01.653 "ddgst": false,
00:23:01.653 "allow_unrecognized_csi": false,
00:23:01.653 "method": "bdev_nvme_attach_controller",
00:23:01.653 "req_id": 1
00:23:01.653 }
00:23:01.653 Got JSON-RPC error response
00:23:01.653 response:
00:23:01.653 {
00:23:01.653 "code": -5,
00:23:01.653 "message": "Input/output error"
00:23:01.653 }
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:23:01.653 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:23:01.654 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:23:01.654 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:23:01.654 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:01.654 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:23:01.654 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:01.654 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:23:01.654 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.654 13:01:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.654 request:
00:23:01.654 {
00:23:01.654 "name": "nvme0",
00:23:01.654 "trtype": "rdma",
00:23:01.654 "traddr": "192.168.100.8",
00:23:01.654 "adrfam": "ipv4",
00:23:01.654 "trsvcid": "4420",
00:23:01.654 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:23:01.654 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:23:01.654 "prchk_reftag": false,
00:23:01.654 "prchk_guard": false,
00:23:01.654 "hdgst": false,
00:23:01.654 "ddgst": false,
00:23:01.654 "dhchap_key": "key2",
00:23:01.913 "allow_unrecognized_csi": false,
00:23:01.913 "method": "bdev_nvme_attach_controller",
00:23:01.913 "req_id": 1
00:23:01.913 }
00:23:01.913 Got JSON-RPC error response
00:23:01.913 response:
00:23:01.913 {
00:23:01.913 "code": -5,
00:23:01.913 "message": "Input/output error"
00:23:01.913 }
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.913 request:
00:23:01.913 {
00:23:01.913 "name": "nvme0",
00:23:01.913 "trtype": "rdma",
00:23:01.913 "traddr": "192.168.100.8",
00:23:01.913 "adrfam": "ipv4",
00:23:01.913 "trsvcid": "4420",
00:23:01.913 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:23:01.913 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:23:01.913 "prchk_reftag": false,
00:23:01.913 "prchk_guard": false,
00:23:01.913 "hdgst": false,
00:23:01.913 "ddgst": false,
00:23:01.913 "dhchap_key": "key1",
00:23:01.913 "dhchap_ctrlr_key": "ckey2",
00:23:01.913 "allow_unrecognized_csi": false,
00:23:01.913 "method": "bdev_nvme_attach_controller",
00:23:01.913 "req_id": 1
00:23:01.913 }
00:23:01.913 Got JSON-RPC error response
00:23:01.913 response:
00:23:01.913 {
00:23:01.913 "code": -5,
00:23:01.913 "message": "Input/output error"
00:23:01.913 }
00:23:01.913 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.914 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:02.173 nvme0n1
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur:
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9:
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur:
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]]
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9:
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:02.173 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.174 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:02.174 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.174 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.432 request: 00:23:02.432 { 00:23:02.432 "name": "nvme0", 00:23:02.432 "dhchap_key": "key1", 00:23:02.432 "dhchap_ctrlr_key": "ckey2", 00:23:02.432 "method": "bdev_nvme_set_keys", 00:23:02.432 "req_id": 1 00:23:02.432 } 00:23:02.432 Got JSON-RPC error response 00:23:02.432 response: 00:23:02.432 { 00:23:02.432 "code": -13, 00:23:02.432 "message": "Permission denied" 00:23:02.432 } 00:23:02.432 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:02.432 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:02.432 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:02.432 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:02.432 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.432 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.432 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.432 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.432 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:02.432 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.432 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:02.432 13:01:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:03.366 13:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.366 13:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.366 13:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.366 13:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:03.366 13:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.366 13:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:03.366 13:01:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:04.300 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.300 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:04.300 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.300 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiMjYzZDk1MjQ3NDhkMjc5ZjM5ODAxN2MwOWQ1ZGM0YzQ0ZTJlOWNhMDVkNzgwfrm2zw==: 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: ]] 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzRjY2M0YzI4N2MzMDkzNmFiMTFlYWNmOGJiYTAxZDgwNzA5YjQ2ODNhZjdhNzMxzu1KRg==: 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:04.558 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.559 nvme0n1 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUyNGQ1MWYyYWY0YTA5MWYxOGUzYjAxMTY2MGYwZTccjEur: 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: ]] 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWUxMDg5ZTMxMGQxMGY1NGFhZWU4N2Y1ZmMyZGY0YjkHb0G9: 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.559 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:04.817 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.817 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:04.817 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.817 13:01:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.817 request: 00:23:04.817 { 00:23:04.817 "name": "nvme0", 00:23:04.817 "dhchap_key": "key2", 00:23:04.817 "dhchap_ctrlr_key": "ckey1", 00:23:04.817 "method": "bdev_nvme_set_keys", 00:23:04.817 "req_id": 1 00:23:04.817 } 00:23:04.817 Got JSON-RPC error response 00:23:04.817 response: 00:23:04.817 { 00:23:04.817 "code": -13, 00:23:04.817 "message": "Permission denied" 00:23:04.817 } 00:23:04.817 13:01:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:04.817 13:01:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:04.817 13:01:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:04.817 13:01:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:04.817 13:01:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:04.817 13:01:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.817 13:01:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:04.817 13:01:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.817 13:01:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.817 13:01:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.817 13:01:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:23:04.817 13:01:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:23:05.749 13:01:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.749 13:01:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:05.749 13:01:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.749 13:01:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.749 13:01:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.749 13:01:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:23:05.749 13:01:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:23:07.122 
13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:07.122 rmmod nvme_rdma 00:23:07.122 rmmod nvme_fabrics 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 71459 ']' 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 71459 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 71459 ']' 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 71459 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71459 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71459' 00:23:07.122 killing process with pid 71459 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 71459 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 71459 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:07.122 13:01:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:23:07.122 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:07.123 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:07.123 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:07.123 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:07.123 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:07.123 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:23:07.123 13:01:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:11.307 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:11.307 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:13.211 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:23:13.469 13:01:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.2Mw /tmp/spdk.key-null.AyJ /tmp/spdk.key-sha256.Xjo /tmp/spdk.key-sha384.NHP /tmp/spdk.key-sha512.hE3 /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:23:13.469 13:01:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:17.661 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:80:04.1 (8086 2021): Already using the 
vfio-pci driver 00:23:17.661 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:23:17.661 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:17.661 00:23:17.661 real 1m5.533s 00:23:17.661 user 0m57.335s 00:23:17.661 sys 0m18.162s 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.661 ************************************ 00:23:17.661 END TEST nvmf_auth_host 00:23:17.661 ************************************ 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.661 ************************************ 00:23:17.661 START TEST nvmf_bdevperf 00:23:17.661 ************************************ 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:17.661 * Looking for test storage... 
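The auth-host run that just ended exercises SPDK's DH-HMAC-CHAP negative paths: attaching with a mismatched controller key fails with JSON-RPC error -5 (Input/output error), and re-keying a live controller with keys the target was never given fails with -13 (Permission denied). As a minimal standalone sketch — assuming scripts/rpc.py from the SPDK tree and a target provisioned as in the trace above, with addresses, NQNs, and key names copied from the log; illustrative only, not part of the recorded run:

  # Mismatched controller key: expect JSON-RPC error -5, "Input/output error"
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2

  # Re-keying an attached controller with keys the target rejects:
  # expect JSON-RPC error -13, "Permission denied"
  scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2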
00:23:17.661 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:17.661 13:01:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:23:17.920 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:17.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.921 --rc genhtml_branch_coverage=1 00:23:17.921 --rc genhtml_function_coverage=1 00:23:17.921 --rc genhtml_legend=1 00:23:17.921 --rc geninfo_all_blocks=1 00:23:17.921 --rc geninfo_unexecuted_blocks=1 00:23:17.921 00:23:17.921 ' 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:17.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.921 --rc genhtml_branch_coverage=1 00:23:17.921 --rc genhtml_function_coverage=1 00:23:17.921 --rc genhtml_legend=1 00:23:17.921 --rc geninfo_all_blocks=1 00:23:17.921 --rc geninfo_unexecuted_blocks=1 00:23:17.921 00:23:17.921 ' 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:17.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.921 --rc genhtml_branch_coverage=1 00:23:17.921 --rc genhtml_function_coverage=1 00:23:17.921 --rc genhtml_legend=1 00:23:17.921 --rc geninfo_all_blocks=1 00:23:17.921 --rc geninfo_unexecuted_blocks=1 00:23:17.921 00:23:17.921 ' 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:17.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.921 --rc genhtml_branch_coverage=1 00:23:17.921 --rc genhtml_function_coverage=1 00:23:17.921 --rc genhtml_legend=1 00:23:17.921 --rc geninfo_all_blocks=1 00:23:17.921 --rc geninfo_unexecuted_blocks=1 00:23:17.921 00:23:17.921 ' 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.921 13:01:44 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.921 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:17.921 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:17.921 13:01:44 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:17.922 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.922 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:17.922 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:17.922 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:17.922 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.922 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.922 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.922 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:17.922 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:17.922 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.922 13:01:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.037 13:01:52 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:26.037 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:26.037 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:26.037 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:26.037 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:26.037 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:26.038 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:26.038 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:26.038 altname enp217s0f0np0 00:23:26.038 altname ens818f0np0 00:23:26.038 inet 192.168.100.8/24 scope global mlx_0_0 00:23:26.038 valid_lft forever preferred_lft forever 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:26.038 13:01:52 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:26.038 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:26.038 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:26.038 altname enp217s0f1np1 00:23:26.038 altname ens818f1np1 00:23:26.038 inet 192.168.100.9/24 scope global mlx_0_1 00:23:26.038 valid_lft forever preferred_lft forever 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:26.038 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:26.297 13:01:52 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:26.297 192.168.100.9' 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:26.297 192.168.100.9' 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:26.297 192.168.100.9' 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=88363 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 88363 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 88363 ']' 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.297 13:01:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:26.297 [2024-11-27 13:01:52.547602] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:23:26.297 [2024-11-27 13:01:52.547665] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.297 [2024-11-27 13:01:52.638702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:26.297 [2024-11-27 13:01:52.679091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.297 [2024-11-27 13:01:52.679131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.297 [2024-11-27 13:01:52.679141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.297 [2024-11-27 13:01:52.679149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.297 [2024-11-27 13:01:52.679156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
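
The get_ip_address calls traced above (nvmf/common.sh@116-117) boil down to one pipeline: ip -o -4 addr show prints one line per address, awk pulls the ADDR/PREFIX field, and cut strips the prefix length. A minimal standalone sketch of that helper, assuming the interface name arrives as the first argument as in the trace; only the pipeline itself is copied from the log:

    # Print the first IPv4 address of an interface, prefix length stripped.
    # On this test bed: get_ip_address mlx_0_0 -> 192.168.100.8
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
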
00:23:26.588 [2024-11-27 13:01:52.680646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.588 [2024-11-27 13:01:52.680731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.588 [2024-11-27 13:01:52.680733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.236 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.236 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:23:27.236 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.236 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.236 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:27.236 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.237 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:27.237 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.237 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:27.237 [2024-11-27 13:01:53.461504] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a50570/0x1a54a60) succeed. 00:23:27.237 [2024-11-27 13:01:53.470661] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a51b60/0x1a96100) succeed. 00:23:27.237 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.237 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:27.237 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.237 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:27.237 Malloc0 00:23:27.237 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.237 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:27.237 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.237 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:23:27.529 [2024-11-27 13:01:53.613273] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.529 { 00:23:27.529 "params": { 00:23:27.529 "name": "Nvme$subsystem", 00:23:27.529 "trtype": "$TEST_TRANSPORT", 00:23:27.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.529 "adrfam": "ipv4", 00:23:27.529 "trsvcid": "$NVMF_PORT", 00:23:27.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.529 "hdgst": ${hdgst:-false}, 00:23:27.529 "ddgst": ${ddgst:-false} 00:23:27.529 }, 00:23:27.529 "method": "bdev_nvme_attach_controller" 00:23:27.529 } 00:23:27.529 EOF 00:23:27.529 )") 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:23:27.529 13:01:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:27.529 "params": { 00:23:27.529 "name": "Nvme1", 00:23:27.529 "trtype": "rdma", 00:23:27.529 "traddr": "192.168.100.8", 00:23:27.529 "adrfam": "ipv4", 00:23:27.529 "trsvcid": "4420", 00:23:27.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.529 "hdgst": false, 00:23:27.529 "ddgst": false 00:23:27.529 }, 00:23:27.529 "method": "bdev_nvme_attach_controller" 00:23:27.529 }' 00:23:27.530 [2024-11-27 13:01:53.666264] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:23:27.530 [2024-11-27 13:01:53.666309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88507 ] 00:23:27.530 [2024-11-27 13:01:53.758026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.530 [2024-11-27 13:01:53.797507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.788 Running I/O for 1 seconds... 
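
The target-side provisioning above runs through rpc_cmd, the harness wrapper around SPDK's JSON-RPC client. The same five calls, issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock socket, would look like the sketch below; the arguments are copied from the trace, while the wrapper-free form is an assumption about how rpc_cmd forwards them:

    # Recreate the traced target setup by hand against a running nvmf_tgt
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
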
00:23:28.725 18248.00 IOPS, 71.28 MiB/s
00:23:28.725 Latency(us)
[2024-11-27T12:01:55.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:28.725 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:28.725 Verification LBA range: start 0x0 length 0x4000
00:23:28.725 Nvme1n1 : 1.01 18267.27 71.36 0.00 0.00 6969.15 2503.48 10747.90
[2024-11-27T12:01:55.110Z] ===================================================================================================================
[2024-11-27T12:01:55.110Z] Total : 18267.27 71.36 0.00 0.00 6969.15 2503.48 10747.90
00:23:28.983 13:01:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=88776 13:01:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 13:01:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 13:01:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 13:01:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 13:01:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 13:01:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 13:01:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.983 { 00:23:28.983 "params": { 00:23:28.983 "name": "Nvme$subsystem", 00:23:28.983 "trtype": "$TEST_TRANSPORT", 00:23:28.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.983 "adrfam": "ipv4", 00:23:28.983 "trsvcid": "$NVMF_PORT", 00:23:28.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.983 "hdgst": ${hdgst:-false}, 00:23:28.983 "ddgst": ${ddgst:-false} 00:23:28.983 }, 00:23:28.983 "method": "bdev_nvme_attach_controller" 00:23:28.983 } 00:23:28.983 EOF 00:23:28.983 )") 00:23:28.983 13:01:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:23:28.983 13:01:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:23:28.984 13:01:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:23:28.984 13:01:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:28.984 "params": { 00:23:28.984 "name": "Nvme1", 00:23:28.984 "trtype": "rdma", 00:23:28.984 "traddr": "192.168.100.8", 00:23:28.984 "adrfam": "ipv4", 00:23:28.984 "trsvcid": "4420", 00:23:28.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.984 "hdgst": false, 00:23:28.984 "ddgst": false 00:23:28.984 }, 00:23:28.984 "method": "bdev_nvme_attach_controller" 00:23:28.984 }' 00:23:28.984 [2024-11-27 13:01:55.212764] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
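
Both bdevperf invocations receive their controller config on /dev/fd/62 and /dev/fd/63, i.e. via process substitution from gen_nvmf_target_json, so no config file touches disk. A hedged reconstruction of the second run: the inner attach-controller object is copied verbatim from the printf above, while the outer "subsystems"/"config" wrapper is an assumption about gen_nvmf_target_json's output shape, not taken from this trace:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -q 128 -o 4096 -w verify -t 15 -f \
        --json <(echo '{
          "subsystems": [{
            "subsystem": "bdev",
            "config": [{
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "rdma", "traddr": "192.168.100.8",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }]
          }]
        }')
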
00:23:28.984 [2024-11-27 13:01:55.212822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88776 ] 00:23:28.984 [2024-11-27 13:01:55.304571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.984 [2024-11-27 13:01:55.340714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.242 Running I/O for 15 seconds... 00:23:31.554 18176.00 IOPS, 71.00 MiB/s [2024-11-27T12:01:58.197Z] 18258.00 IOPS, 71.32 MiB/s [2024-11-27T12:01:58.197Z] 13:01:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 88363 00:23:31.812 13:01:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:23:32.945 16128.00 IOPS, 63.00 MiB/s [2024-11-27T12:01:59.330Z] [2024-11-27 13:01:59.200099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.945 [2024-11-27 13:01:59.200140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:cff200 sqhd:8e00 p:0 m:0 dnr:0 00:23:32.945 [2024-11-27 13:01:59.200153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.945 [2024-11-27 13:01:59.200163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:cff200 sqhd:8e00 p:0 m:0 dnr:0 00:23:32.945 [2024-11-27 13:01:59.200173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.945 [2024-11-27 13:01:59.200182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:cff200 sqhd:8e00 p:0 m:0 dnr:0 00:23:32.945 [2024-11-27 13:01:59.200191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.945 [2024-11-27 13:01:59.200200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:cff200 sqhd:8e00 p:0 m:0 dnr:0 00:23:32.945 [2024-11-27 13:01:59.202153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:23:32.945 [2024-11-27 13:01:59.202175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
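
What follows is the direct consequence of the kill -9 above: with the target gone, the host driver tears down its queue pairs and completes every in-flight command locally as ABORTED - SQ DELETION, status (00/08), i.e. status code type 0x0 (generic) / status code 0x08 (command aborted due to SQ deletion), with dnr:0 marking each one retryable. A quick way to quantify the burst from a saved console log (build.log is an illustrative filename):

    # Count the aborted WRITEs and show the first/last LBA they covered
    grep -c 'WRITE sqid:1 .* SGL DATA BLOCK' build.log
    grep -oE 'lba:[0-9]+' build.log | cut -d: -f2 | sort -n | sed -n '1p;$p'
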
00:23:32.945 [2024-11-27 13:01:59.202197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:32.945 [2024-11-27 13:01:59.202207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dcc000 sqhd:7210 p:0 m:0 dnr:0
00:23:32.945 [2024-11-27 13:01:59.202266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:32.945 [2024-11-27 13:01:59.202277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dcc000 sqhd:7210 p:0 m:0 dnr:0
[... the same WRITE print_command / ABORTED - SQ DELETION print_completion pair repeats for the remaining queued writes, lba:124968 through lba:125944 in steps of 8 (125 aborted writes in total), every completion carrying the identical status (00/08) qid:1 cid:0 cdw0:2dcc000 sqhd:7210 p:0 m:0 dnr:0 ...]
00:23:32.947 [2024-11-27 13:01:59.207235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:124928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x180e00
00:23:32.947 [2024-11-27 13:01:59.207244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dcc000 sqhd:7210 p:0 m:0 dnr:0
00:23:32.947 [2024-11-27 13:01:59.207278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x180e00
00:23:32.947 [2024-11-27 13:01:59.207287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dcc000 sqhd:7210 p:0 m:0 dnr:0
00:23:32.947 [2024-11-27 13:01:59.221700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:32.947 [2024-11-27 13:01:59.221718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:32.947 [2024-11-27 13:01:59.221727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124944 len:8 PRP1 0x0 PRP2 0x0
00:23:32.947 [2024-11-27 13:01:59.221737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:32.947 [2024-11-27 13:01:59.221816] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*:
[nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress.
00:23:32.947 [2024-11-27 13:01:59.221843] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress.
00:23:32.947 [2024-11-27 13:01:59.224525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:23:32.947 [2024-11-27 13:01:59.227378] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:23:32.947 [2024-11-27 13:01:59.227398] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:23:32.947 [2024-11-27 13:01:59.227406] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:23:34.140 12096.00 IOPS, 47.25 MiB/s [2024-11-27T12:02:00.525Z] [2024-11-27 13:02:00.231675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:23:34.140 [2024-11-27 13:02:00.231747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:23:34.140 [2024-11-27 13:02:00.232056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:23:34.140 [2024-11-27 13:02:00.232068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:23:34.140 [2024-11-27 13:02:00.232077] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:23:34.140 [2024-11-27 13:02:00.232090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
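The ABORTED - SQ DELETION (00/08) storm above is the expected fallout of the failover: NVMe generic status type 00h, code 08h means "Command Aborted due to SQ Deletion", and it is printed once for every I/O still queued when the submission queue was torn down. The interleaved throughput samples also check out arithmetically: bdevperf is issuing 4096-byte I/Os, so MiB/s = IOPS * 4096 / 2^20 = IOPS / 256. A quick check:

    # with 4 KiB I/Os, MiB/s is just IOPS / 256
    echo "scale=2; 12096.00 / 256" | bc   # -> 47.25, matching the sample above
    echo "scale=2; 9676.80 / 256" | bc    # -> 37.80, matching the next sample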
00:23:34.140 [2024-11-27 13:02:00.239744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:34.140 [2024-11-27 13:02:00.243190] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:34.140 [2024-11-27 13:02:00.243212] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:34.140 [2024-11-27 13:02:00.243221] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:23:34.965 9676.80 IOPS, 37.80 MiB/s [2024-11-27T12:02:01.350Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 88363 Killed "${NVMF_APP[@]}" "$@" 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=89845 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 89845 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 89845 ']' 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.966 13:02:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:34.966 [2024-11-27 13:02:01.233245] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:23:34.966 [2024-11-27 13:02:01.233292] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.966 [2024-11-27 13:02:01.247110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:23:34.966 [2024-11-27 13:02:01.247136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
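At this point bdevperf.sh has killed the long-running target (pid 88363) mid-workload and tgt_init brings up a fresh one; the reconnect/abort noise on either side is the behavior under test. Condensed from the trace above, the restart amounts to this sketch (paths as used by this CI job; waitforlisten is the autotest helper that polls until the RPC socket answers):

    NVMF_APP=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    "$NVMF_APP" -i 0 -e 0xFFFF -m 0xE &   # -i 0: shm id, -e 0xFFFF: enable all tracepoint groups, -m 0xE: cores 1-3
    nvmfpid=$!
    waitforlisten "$nvmfpid"              # blocks until /var/tmp/spdk.sock accepts RPCs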
00:23:34.966 [2024-11-27 13:02:01.247310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:34.966 [2024-11-27 13:02:01.247321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:34.966 [2024-11-27 13:02:01.247330] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:23:34.966 [2024-11-27 13:02:01.247343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:34.966 [2024-11-27 13:02:01.253240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:34.966 [2024-11-27 13:02:01.255891] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:34.966 [2024-11-27 13:02:01.255912] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:34.966 [2024-11-27 13:02:01.255920] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:23:34.966 [2024-11-27 13:02:01.325180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:35.224 [2024-11-27 13:02:01.365711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.224 [2024-11-27 13:02:01.365749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.224 [2024-11-27 13:02:01.365758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.224 [2024-11-27 13:02:01.365767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.224 [2024-11-27 13:02:01.365774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
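The app_setup_trace notices above are actionable: because the target was started with -e 0xFFFF, every tracepoint group is recording, and the notices spell out two ways to collect the data. In shell form (commands quoted from the notices themselves):

    spdk_trace -s nvmf -i 0           # snapshot the live trace of app instance 0
    cp /dev/shm/nvmf_trace.0 /tmp/    # or keep the shared-memory trace file for offline analysis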
00:23:35.224 [2024-11-27 13:02:01.367206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.224 [2024-11-27 13:02:01.367289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.224 [2024-11-27 13:02:01.367291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.790 8064.00 IOPS, 31.50 MiB/s [2024-11-27T12:02:02.175Z] 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.790 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:23:35.790 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.790 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.790 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:35.790 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.790 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:35.790 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.790 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:35.790 [2024-11-27 13:02:02.141068] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2066570/0x206aa60) succeed. 00:23:35.790 [2024-11-27 13:02:02.150254] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2067b60/0x20ac100) succeed. 00:23:36.049 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.049 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:36.049 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.049 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:36.049 [2024-11-27 13:02:02.260051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:23:36.049 [2024-11-27 13:02:02.260083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:36.049 [2024-11-27 13:02:02.260260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:36.049 [2024-11-27 13:02:02.260271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:36.049 [2024-11-27 13:02:02.260281] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:23:36.049 [2024-11-27 13:02:02.260294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
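The three reactor notices above line up with the -m 0xE core mask passed at startup: 0xE is binary 1110, i.e. cores 1, 2 and 3, with core 0 left free. A one-liner to decode any such mask:

    mask=0xE; for c in {0..7}; do (( mask >> c & 1 )) && echo "core $c"; done   # prints core 1, core 2, core 3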
00:23:36.049 [2024-11-27 13:02:02.268396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:36.049 [2024-11-27 13:02:02.271267] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:36.049 [2024-11-27 13:02:02.271292] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:36.049 [2024-11-27 13:02:02.271300] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:23:36.049 Malloc0 00:23:36.049 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.049 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:36.049 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.049 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:36.049 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.050 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:36.050 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.050 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:36.050 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.050 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:36.050 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.050 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:36.050 [2024-11-27 13:02:02.297460] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:36.050 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.050 13:02:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 88776 00:23:37.243 6912.00 IOPS, 27.00 MiB/s [2024-11-27T12:02:03.628Z] [2024-11-27 13:02:03.275266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:23:37.243 [2024-11-27 13:02:03.275290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:37.243 [2024-11-27 13:02:03.275465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:37.243 [2024-11-27 13:02:03.275475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:37.243 [2024-11-27 13:02:03.275490] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:23:37.243 [2024-11-27 13:02:03.275501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
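rpc_cmd is the autotest wrapper around scripts/rpc.py, so the target bring-up traced above is equivalent to the following sequence against the default /var/tmp/spdk.sock (a sketch, run from the spdk tree):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport, 8 KiB I/O unit size
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB RAM-backed bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420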
00:23:37.243 [2024-11-27 13:02:03.282982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:37.243 [2024-11-27 13:02:03.318630] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:23:38.178 6551.75 IOPS, 25.59 MiB/s [2024-11-27T12:02:05.939Z] 7863.56 IOPS, 30.72 MiB/s [2024-11-27T12:02:06.875Z] 8915.20 IOPS, 34.83 MiB/s [2024-11-27T12:02:07.810Z] 9773.73 IOPS, 38.18 MiB/s [2024-11-27T12:02:08.742Z] 10490.67 IOPS, 40.98 MiB/s [2024-11-27T12:02:09.676Z] 11095.85 IOPS, 43.34 MiB/s [2024-11-27T12:02:10.611Z] 11616.00 IOPS, 45.38 MiB/s [2024-11-27T12:02:10.611Z] 12061.87 IOPS, 47.12 MiB/s 00:23:44.226 Latency(us) 00:23:44.226 [2024-11-27T12:02:10.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.226 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:44.226 Verification LBA range: start 0x0 length 0x4000 00:23:44.226 Nvme1n1 : 15.01 12061.64 47.12 13672.26 0.00 4953.77 350.62 1053609.16 00:23:44.226 [2024-11-27T12:02:10.611Z] =================================================================================================================== 00:23:44.226 [2024-11-27T12:02:10.611Z] Total : 12061.64 47.12 13672.26 0.00 4953.77 350.62 1053609.16 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:44.485 rmmod nvme_rdma 00:23:44.485 rmmod nvme_fabrics 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 89845 ']' 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 89845 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 89845 ']' 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # 
kill -0 89845 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.485 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89845 00:23:44.744 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:44.744 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:44.744 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89845' 00:23:44.744 killing process with pid 89845 00:23:44.744 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 89845 00:23:44.744 13:02:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 89845 00:23:44.744 13:02:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.744 13:02:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:44.744 00:23:44.744 real 0m27.242s 00:23:44.744 user 1m5.001s 00:23:44.744 sys 0m7.670s 00:23:44.744 13:02:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.744 13:02:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:44.744 ************************************ 00:23:44.744 END TEST nvmf_bdevperf 00:23:44.744 ************************************ 00:23:45.002 13:02:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:23:45.002 13:02:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:45.002 13:02:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.002 13:02:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.002 ************************************ 00:23:45.002 START TEST nvmf_target_disconnect 00:23:45.002 ************************************ 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:23:45.003 * Looking for test storage... 
00:23:45.003 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:45.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.003 --rc genhtml_branch_coverage=1 00:23:45.003 --rc genhtml_function_coverage=1 00:23:45.003 --rc genhtml_legend=1 00:23:45.003 --rc geninfo_all_blocks=1 00:23:45.003 --rc geninfo_unexecuted_blocks=1 00:23:45.003 00:23:45.003 ' 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:45.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.003 --rc genhtml_branch_coverage=1 00:23:45.003 --rc genhtml_function_coverage=1 00:23:45.003 --rc genhtml_legend=1 00:23:45.003 --rc geninfo_all_blocks=1 00:23:45.003 --rc geninfo_unexecuted_blocks=1 00:23:45.003 00:23:45.003 ' 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:45.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.003 --rc genhtml_branch_coverage=1 00:23:45.003 --rc genhtml_function_coverage=1 00:23:45.003 --rc genhtml_legend=1 00:23:45.003 --rc geninfo_all_blocks=1 00:23:45.003 --rc geninfo_unexecuted_blocks=1 00:23:45.003 00:23:45.003 ' 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:45.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.003 --rc genhtml_branch_coverage=1 00:23:45.003 --rc genhtml_function_coverage=1 00:23:45.003 --rc genhtml_legend=1 00:23:45.003 --rc geninfo_all_blocks=1 00:23:45.003 --rc geninfo_unexecuted_blocks=1 00:23:45.003 00:23:45.003 ' 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.003 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.262 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:45.262 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:45.262 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.262 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.262 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.263 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:23:45.263 13:02:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:53.410 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:53.410 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:53.410 13:02:19 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:53.410 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:53.410 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:53.410 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:53.411 13:02:19 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:53.411 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:53.411 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:53.411 altname enp217s0f0np0 00:23:53.411 altname ens818f0np0 00:23:53.411 inet 192.168.100.8/24 scope global mlx_0_0 00:23:53.411 valid_lft forever preferred_lft forever 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:53.411 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:53.411 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:53.411 altname enp217s0f1np1 00:23:53.411 altname ens818f1np1 00:23:53.411 inet 192.168.100.9/24 scope global mlx_0_1 00:23:53.411 valid_lft forever preferred_lft forever 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:53.411 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:53.670 192.168.100.9' 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:53.670 192.168.100.9' 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:53.670 192.168.100.9' 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:53.670 ************************************ 00:23:53.670 START TEST nvmf_target_disconnect_tc1 00:23:53.670 ************************************ 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:23:53.670 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.671 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:23:53.671 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.671 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:23:53.671 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.671 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:23:53.671 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:23:53.671 13:02:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:53.671 [2024-11-27 13:02:20.037741] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:53.671 [2024-11-27 13:02:20.037786] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:53.671 [2024-11-27 13:02:20.037796] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:23:55.046 [2024-11-27 13:02:21.041634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:23:55.046 [2024-11-27 13:02:21.041692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
00:23:55.046 [2024-11-27 13:02:21.041703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:23:55.046 [2024-11-27 13:02:21.041733] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:55.046 [2024-11-27 13:02:21.041744] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:23:55.046 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:23:55.046 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:23:55.046 Initializing NVMe Controllers 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:55.046 00:23:55.046 real 0m1.166s 00:23:55.046 user 0m0.887s 00:23:55.046 sys 0m0.268s 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:55.046 ************************************ 00:23:55.046 END TEST nvmf_target_disconnect_tc1 00:23:55.046 ************************************ 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:55.046 ************************************ 00:23:55.046 START TEST nvmf_target_disconnect_tc2 00:23:55.046 ************************************ 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=95763 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 95763 00:23:55.046 13:02:21 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 95763 ']' 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.046 13:02:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.046 [2024-11-27 13:02:21.199205] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:23:55.046 [2024-11-27 13:02:21.199254] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.046 [2024-11-27 13:02:21.303976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:55.046 [2024-11-27 13:02:21.342516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.046 [2024-11-27 13:02:21.342559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.046 [2024-11-27 13:02:21.342569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.047 [2024-11-27 13:02:21.342577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.047 [2024-11-27 13:02:21.342584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:55.047 [2024-11-27 13:02:21.344246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:55.047 [2024-11-27 13:02:21.344357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:55.047 [2024-11-27 13:02:21.344466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:55.047 [2024-11-27 13:02:21.344468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:23:55.982 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.982 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:55.982 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.982 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.982 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.982 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.982 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:55.982 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.982 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.982 Malloc0 00:23:55.982 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.983 [2024-11-27 13:02:22.142863] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x181cc30/0x1828c00) succeed. 00:23:55.983 [2024-11-27 13:02:22.152535] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x181e2c0/0x186a2a0) succeed. 
00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.983 [2024-11-27 13:02:22.292196] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=95959 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:23:55.983 13:02:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:58.514 13:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 95763 
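The rpc_cmd calls traced above are the entire target bring-up for this test: a 64 MB malloc bdev with 512-byte blocks, an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and listeners (subsystem plus discovery) on 192.168.100.8:4420; the kill -9 at the end of the trace is the start of the disconnect scenario, picked up below. A minimal standalone sketch of the same sequence, assuming the harness's rpc_cmd is equivalent to invoking scripts/rpc.py against the default /var/tmp/spdk.sock (all flags, names, and addresses are copied from the trace; the backgrounding and PID bookkeeping are mine):

  # Start the target with the same flags nvmfappstart used here
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # Once the RPC socket is up, mirror the traced rpc_cmd sequence
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420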
00:23:58.514 13:02:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Write completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.448 Read completed with error (sct=0, sc=8) 00:23:59.448 starting I/O failed 00:23:59.449 Write completed with error (sct=0, sc=8) 00:23:59.449 starting I/O failed 00:23:59.449 Write completed with error (sct=0, sc=8) 00:23:59.449 starting I/O failed 00:23:59.449 [2024-11-27 13:02:25.516717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:00.015 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 95763 Killed "${NVMF_APP[@]}" "$@" 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 
0xF0 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=96750 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 96750 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 96750 ']' 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.015 13:02:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:00.015 [2024-11-27 13:02:26.367199] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:24:00.015 [2024-11-27 13:02:26.367249] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.274 [2024-11-27 13:02:26.455225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.274 [2024-11-27 13:02:26.493626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.274 [2024-11-27 13:02:26.493669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.274 [2024-11-27 13:02:26.493679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.274 [2024-11-27 13:02:26.493687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.274 [2024-11-27 13:02:26.493694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
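The kill -9 at host/target_disconnect.sh line 45 above is the disconnect under test: the reconnect example (pid 95959) was started against 192.168.100.8:4420, given two seconds of I/O, and then the target (pid 95763) was killed out from under it, which produces the 32 "completed with error (sct=0, sc=8)" completions and the CQ transport error -6; here the harness brings up a fresh nvmf_tgt (pid 96750). A sketch of that shape, with the hard-coded PIDs replaced by shell variables (the $! bookkeeping is mine; the harness tracks PIDs via nvmfappstart and waitforlisten):

  # Drive I/O, then yank the target away mid-run
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"     # target dies under active I/O -> the error completions above
  sleep 2
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &   # fresh target instance
  nvmfpid=$!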
00:24:00.274 [2024-11-27 13:02:26.495557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:00.274 [2024-11-27 13:02:26.495667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:00.274 [2024-11-27 13:02:26.495774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:00.274 [2024-11-27 13:02:26.495774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Read completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Read completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Read completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Read completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Read completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Read completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Read completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Read completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Read completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Read completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Read completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Write completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 Read completed with error (sct=0, sc=8) 00:24:00.274 starting I/O failed 00:24:00.274 [2024-11-27 13:02:26.521985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.842 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.842 
13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:00.842 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.842 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.842 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:01.100 Malloc0 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:01.100 [2024-11-27 13:02:27.290364] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe02c30/0xe0ec00) succeed. 00:24:01.100 [2024-11-27 13:02:27.299786] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe042c0/0xe502a0) succeed. 
00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:01.100 [2024-11-27 13:02:27.438284] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.100 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:01.101 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.101 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:01.101 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.101 13:02:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 95959 00:24:01.359 Write completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Write completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Write completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Write completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Read completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Read completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Write completed with error (sct=0, sc=8) 00:24:01.359 
starting I/O failed 00:24:01.359 Write completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Read completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Read completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Write completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Write completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Read completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Read completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Read completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Write completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Read completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Write completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Write completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Read completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Read completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Read completed with error (sct=0, sc=8) 00:24:01.359 starting I/O failed 00:24:01.359 Read completed with error (sct=0, sc=8) 00:24:01.360 starting I/O failed 00:24:01.360 Write completed with error (sct=0, sc=8) 00:24:01.360 starting I/O failed 00:24:01.360 Read completed with error (sct=0, sc=8) 00:24:01.360 starting I/O failed 00:24:01.360 Write completed with error (sct=0, sc=8) 00:24:01.360 starting I/O failed 00:24:01.360 Read completed with error (sct=0, sc=8) 00:24:01.360 starting I/O failed 00:24:01.360 Write completed with error (sct=0, sc=8) 00:24:01.360 starting I/O failed 00:24:01.360 Read completed with error (sct=0, sc=8) 00:24:01.360 starting I/O failed 00:24:01.360 Read completed with error (sct=0, sc=8) 00:24:01.360 starting I/O failed 00:24:01.360 Read completed with error (sct=0, sc=8) 00:24:01.360 starting I/O failed 00:24:01.360 Read completed with error (sct=0, sc=8) 00:24:01.360 starting I/O failed 00:24:01.360 [2024-11-27 13:02:27.526910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.360 [2024-11-27 13:02:27.536445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.360 [2024-11-27 13:02:27.536494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.360 [2024-11-27 13:02:27.536515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.360 [2024-11-27 13:02:27.536526] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.360 [2024-11-27 13:02:27.536536] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.360 [2024-11-27 13:02:27.546592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.360 qpair failed and we were unable to recover it. 
00:24:01.360 [2024-11-27 13:02:27.556517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.360 [2024-11-27 13:02:27.556564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.360 [2024-11-27 13:02:27.556584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.360 [2024-11-27 13:02:27.556594] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.360 [2024-11-27 13:02:27.556603] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.360 [2024-11-27 13:02:27.566909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.360 qpair failed and we were unable to recover it. 00:24:01.360 [2024-11-27 13:02:27.576656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.360 [2024-11-27 13:02:27.576694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.360 [2024-11-27 13:02:27.576713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.360 [2024-11-27 13:02:27.576726] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.360 [2024-11-27 13:02:27.576735] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.360 [2024-11-27 13:02:27.586807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.360 qpair failed and we were unable to recover it. 00:24:01.360 [2024-11-27 13:02:27.596496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.360 [2024-11-27 13:02:27.596539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.360 [2024-11-27 13:02:27.596558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.360 [2024-11-27 13:02:27.596568] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.360 [2024-11-27 13:02:27.596577] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.360 [2024-11-27 13:02:27.607088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.360 qpair failed and we were unable to recover it. 
00:24:01.360 [2024-11-27 13:02:27.616782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.360 [2024-11-27 13:02:27.616831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.360 [2024-11-27 13:02:27.616849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.360 [2024-11-27 13:02:27.616859] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.360 [2024-11-27 13:02:27.616868] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.360 [2024-11-27 13:02:27.627107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.360 qpair failed and we were unable to recover it. 00:24:01.360 [2024-11-27 13:02:27.636755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.360 [2024-11-27 13:02:27.636794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.360 [2024-11-27 13:02:27.636812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.360 [2024-11-27 13:02:27.636822] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.360 [2024-11-27 13:02:27.636831] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.360 [2024-11-27 13:02:27.647130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.360 qpair failed and we were unable to recover it. 00:24:01.360 [2024-11-27 13:02:27.656640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.360 [2024-11-27 13:02:27.656678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.360 [2024-11-27 13:02:27.656697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.360 [2024-11-27 13:02:27.656706] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.360 [2024-11-27 13:02:27.656715] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.360 [2024-11-27 13:02:27.667258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.360 qpair failed and we were unable to recover it. 
00:24:01.360 [2024-11-27 13:02:27.676885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.360 [2024-11-27 13:02:27.676929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.360 [2024-11-27 13:02:27.676948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.360 [2024-11-27 13:02:27.676958] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.360 [2024-11-27 13:02:27.676966] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.360 [2024-11-27 13:02:27.687137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.360 qpair failed and we were unable to recover it. 00:24:01.360 [2024-11-27 13:02:27.696781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.360 [2024-11-27 13:02:27.696826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.360 [2024-11-27 13:02:27.696844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.360 [2024-11-27 13:02:27.696854] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.360 [2024-11-27 13:02:27.696863] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.360 [2024-11-27 13:02:27.707176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.360 qpair failed and we were unable to recover it. 00:24:01.360 [2024-11-27 13:02:27.717011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.360 [2024-11-27 13:02:27.717056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.360 [2024-11-27 13:02:27.717074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.360 [2024-11-27 13:02:27.717084] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.360 [2024-11-27 13:02:27.717092] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.360 [2024-11-27 13:02:27.727336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.360 qpair failed and we were unable to recover it. 
00:24:01.360 [2024-11-27 13:02:27.737011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.360 [2024-11-27 13:02:27.737050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.360 [2024-11-27 13:02:27.737068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.360 [2024-11-27 13:02:27.737078] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.360 [2024-11-27 13:02:27.737086] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.619 [2024-11-27 13:02:27.747577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.619 qpair failed and we were unable to recover it. 00:24:01.619 [2024-11-27 13:02:27.757261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.619 [2024-11-27 13:02:27.757305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.619 [2024-11-27 13:02:27.757323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.619 [2024-11-27 13:02:27.757333] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.619 [2024-11-27 13:02:27.757342] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.619 [2024-11-27 13:02:27.767339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.619 qpair failed and we were unable to recover it. 00:24:01.619 [2024-11-27 13:02:27.777097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.619 [2024-11-27 13:02:27.777140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.619 [2024-11-27 13:02:27.777159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.619 [2024-11-27 13:02:27.777168] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.619 [2024-11-27 13:02:27.777177] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.619 [2024-11-27 13:02:27.787620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.619 qpair failed and we were unable to recover it. 
00:24:01.619 [2024-11-27 13:02:27.797391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.619 [2024-11-27 13:02:27.797430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.619 [2024-11-27 13:02:27.797448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.619 [2024-11-27 13:02:27.797458] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.619 [2024-11-27 13:02:27.797467] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.619 [2024-11-27 13:02:27.807665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.619 qpair failed and we were unable to recover it. 00:24:01.619 [2024-11-27 13:02:27.817256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.619 [2024-11-27 13:02:27.817304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.619 [2024-11-27 13:02:27.817322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.619 [2024-11-27 13:02:27.817332] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.619 [2024-11-27 13:02:27.817341] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.619 [2024-11-27 13:02:27.827682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.619 qpair failed and we were unable to recover it. 00:24:01.619 [2024-11-27 13:02:27.837388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:01.619 [2024-11-27 13:02:27.837431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:01.619 [2024-11-27 13:02:27.837453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:01.619 [2024-11-27 13:02:27.837463] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:01.619 [2024-11-27 13:02:27.837472] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:01.619 [2024-11-27 13:02:27.847597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:01.619 qpair failed and we were unable to recover it. 
00:24:01.619 [2024-11-27 13:02:27.857352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.619 [2024-11-27 13:02:27.857396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.619 [2024-11-27 13:02:27.857415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.619 [2024-11-27 13:02:27.857425] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.619 [2024-11-27 13:02:27.857433] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:01.619 [2024-11-27 13:02:27.867709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:01.619 qpair failed and we were unable to recover it.
[... the same six-record CONNECT failure sequence repeats for every retry between 13:02:27.877 and 13:02:29.211, differing only in timestamps; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:02.926 [2024-11-27 13:02:29.221193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:02.926 [2024-11-27 13:02:29.221232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:02.926 [2024-11-27 13:02:29.221251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:02.926 [2024-11-27 13:02:29.221260] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:02.926 [2024-11-27 13:02:29.221269] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:02.926 [2024-11-27 13:02:29.231545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:02.926 qpair failed and we were unable to recover it.
00:24:02.926 [2024-11-27 13:02:29.241166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.926 [2024-11-27 13:02:29.241205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.926 [2024-11-27 13:02:29.241223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.926 [2024-11-27 13:02:29.241233] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.926 [2024-11-27 13:02:29.241242] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:02.926 [2024-11-27 13:02:29.251423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.926 qpair failed and we were unable to recover it. 00:24:02.926 [2024-11-27 13:02:29.261324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.926 [2024-11-27 13:02:29.261362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.926 [2024-11-27 13:02:29.261380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.926 [2024-11-27 13:02:29.261390] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.926 [2024-11-27 13:02:29.261399] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:02.926 [2024-11-27 13:02:29.271625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.926 qpair failed and we were unable to recover it. 00:24:02.926 [2024-11-27 13:02:29.281445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.926 [2024-11-27 13:02:29.281488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.926 [2024-11-27 13:02:29.281507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.926 [2024-11-27 13:02:29.281516] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.926 [2024-11-27 13:02:29.281528] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:02.926 [2024-11-27 13:02:29.291681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.926 qpair failed and we were unable to recover it. 
00:24:02.926 [2024-11-27 13:02:29.301391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.926 [2024-11-27 13:02:29.301431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.926 [2024-11-27 13:02:29.301449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.926 [2024-11-27 13:02:29.301459] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.926 [2024-11-27 13:02:29.301468] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.182 [2024-11-27 13:02:29.311642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.182 qpair failed and we were unable to recover it. 00:24:03.182 [2024-11-27 13:02:29.321528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.182 [2024-11-27 13:02:29.321568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.182 [2024-11-27 13:02:29.321586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.182 [2024-11-27 13:02:29.321596] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.182 [2024-11-27 13:02:29.321604] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.182 [2024-11-27 13:02:29.331709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.182 qpair failed and we were unable to recover it. 00:24:03.182 [2024-11-27 13:02:29.341502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.182 [2024-11-27 13:02:29.341545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.182 [2024-11-27 13:02:29.341563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.182 [2024-11-27 13:02:29.341573] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.182 [2024-11-27 13:02:29.341582] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.183 [2024-11-27 13:02:29.351709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.183 qpair failed and we were unable to recover it. 
00:24:03.183 [2024-11-27 13:02:29.361570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.183 [2024-11-27 13:02:29.361617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.183 [2024-11-27 13:02:29.361635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.183 [2024-11-27 13:02:29.361645] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.183 [2024-11-27 13:02:29.361654] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.183 [2024-11-27 13:02:29.371839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.183 qpair failed and we were unable to recover it. 00:24:03.183 [2024-11-27 13:02:29.381599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.183 [2024-11-27 13:02:29.381650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.183 [2024-11-27 13:02:29.381669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.183 [2024-11-27 13:02:29.381678] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.183 [2024-11-27 13:02:29.381688] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.183 [2024-11-27 13:02:29.391925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.183 qpair failed and we were unable to recover it. 00:24:03.183 [2024-11-27 13:02:29.401812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.183 [2024-11-27 13:02:29.401855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.183 [2024-11-27 13:02:29.401874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.183 [2024-11-27 13:02:29.401883] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.183 [2024-11-27 13:02:29.401892] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.183 [2024-11-27 13:02:29.412066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.183 qpair failed and we were unable to recover it. 
00:24:03.183 [2024-11-27 13:02:29.421767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.183 [2024-11-27 13:02:29.421802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.183 [2024-11-27 13:02:29.421821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.183 [2024-11-27 13:02:29.421831] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.183 [2024-11-27 13:02:29.421840] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.183 [2024-11-27 13:02:29.431967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.183 qpair failed and we were unable to recover it. 00:24:03.183 [2024-11-27 13:02:29.441805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.183 [2024-11-27 13:02:29.441848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.183 [2024-11-27 13:02:29.441866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.183 [2024-11-27 13:02:29.441876] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.183 [2024-11-27 13:02:29.441884] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.183 [2024-11-27 13:02:29.452142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.183 qpair failed and we were unable to recover it. 00:24:03.183 [2024-11-27 13:02:29.461857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.183 [2024-11-27 13:02:29.461901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.183 [2024-11-27 13:02:29.461922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.183 [2024-11-27 13:02:29.461932] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.183 [2024-11-27 13:02:29.461940] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.183 [2024-11-27 13:02:29.472028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.183 qpair failed and we were unable to recover it. 
00:24:03.183 [2024-11-27 13:02:29.482029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.183 [2024-11-27 13:02:29.482066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.183 [2024-11-27 13:02:29.482084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.183 [2024-11-27 13:02:29.482094] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.183 [2024-11-27 13:02:29.482102] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.183 [2024-11-27 13:02:29.492357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.183 qpair failed and we were unable to recover it. 00:24:03.183 [2024-11-27 13:02:29.501953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.183 [2024-11-27 13:02:29.501992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.183 [2024-11-27 13:02:29.502010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.183 [2024-11-27 13:02:29.502020] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.183 [2024-11-27 13:02:29.502029] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.183 [2024-11-27 13:02:29.512174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.183 qpair failed and we were unable to recover it. 00:24:03.183 [2024-11-27 13:02:29.521994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.183 [2024-11-27 13:02:29.522035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.183 [2024-11-27 13:02:29.522054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.183 [2024-11-27 13:02:29.522064] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.183 [2024-11-27 13:02:29.522072] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.183 [2024-11-27 13:02:29.532271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.183 qpair failed and we were unable to recover it. 
00:24:03.183 [2024-11-27 13:02:29.542152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.183 [2024-11-27 13:02:29.542200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.183 [2024-11-27 13:02:29.542219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.183 [2024-11-27 13:02:29.542232] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.183 [2024-11-27 13:02:29.542241] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.183 [2024-11-27 13:02:29.552370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.183 qpair failed and we were unable to recover it. 00:24:03.183 [2024-11-27 13:02:29.562191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.183 [2024-11-27 13:02:29.562229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.183 [2024-11-27 13:02:29.562247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.183 [2024-11-27 13:02:29.562257] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.183 [2024-11-27 13:02:29.562266] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.441 [2024-11-27 13:02:29.572471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.441 qpair failed and we were unable to recover it. 00:24:03.441 [2024-11-27 13:02:29.582149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.441 [2024-11-27 13:02:29.582189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.441 [2024-11-27 13:02:29.582207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.441 [2024-11-27 13:02:29.582217] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.441 [2024-11-27 13:02:29.582226] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.441 [2024-11-27 13:02:29.592497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.442 qpair failed and we were unable to recover it. 
00:24:03.442 [2024-11-27 13:02:29.602262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.442 [2024-11-27 13:02:29.602306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.442 [2024-11-27 13:02:29.602324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.442 [2024-11-27 13:02:29.602334] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.442 [2024-11-27 13:02:29.602343] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.442 [2024-11-27 13:02:29.612615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.442 qpair failed and we were unable to recover it. 00:24:03.442 [2024-11-27 13:02:29.622312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.442 [2024-11-27 13:02:29.622353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.442 [2024-11-27 13:02:29.622371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.442 [2024-11-27 13:02:29.622381] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.442 [2024-11-27 13:02:29.622390] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.442 [2024-11-27 13:02:29.632627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.442 qpair failed and we were unable to recover it. 00:24:03.442 [2024-11-27 13:02:29.642469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.442 [2024-11-27 13:02:29.642509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.442 [2024-11-27 13:02:29.642527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.442 [2024-11-27 13:02:29.642537] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.442 [2024-11-27 13:02:29.642546] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.442 [2024-11-27 13:02:29.652651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.442 qpair failed and we were unable to recover it. 
00:24:03.442 [2024-11-27 13:02:29.662445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.442 [2024-11-27 13:02:29.662487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.442 [2024-11-27 13:02:29.662505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.442 [2024-11-27 13:02:29.662515] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.442 [2024-11-27 13:02:29.662523] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.442 [2024-11-27 13:02:29.672893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.442 qpair failed and we were unable to recover it. 00:24:03.442 [2024-11-27 13:02:29.682655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.442 [2024-11-27 13:02:29.682699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.442 [2024-11-27 13:02:29.682717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.442 [2024-11-27 13:02:29.682726] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.442 [2024-11-27 13:02:29.682735] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.442 [2024-11-27 13:02:29.692957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.442 qpair failed and we were unable to recover it. 00:24:03.442 [2024-11-27 13:02:29.702708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.442 [2024-11-27 13:02:29.702754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.442 [2024-11-27 13:02:29.702771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.442 [2024-11-27 13:02:29.702780] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.442 [2024-11-27 13:02:29.702789] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.442 [2024-11-27 13:02:29.712983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.442 qpair failed and we were unable to recover it. 
00:24:03.442 [2024-11-27 13:02:29.722873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.442 [2024-11-27 13:02:29.722921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.442 [2024-11-27 13:02:29.722939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.442 [2024-11-27 13:02:29.722949] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.442 [2024-11-27 13:02:29.722958] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.442 [2024-11-27 13:02:29.733016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.442 qpair failed and we were unable to recover it. 00:24:03.442 [2024-11-27 13:02:29.742789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.442 [2024-11-27 13:02:29.742831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.442 [2024-11-27 13:02:29.742850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.442 [2024-11-27 13:02:29.742860] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.442 [2024-11-27 13:02:29.742869] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.442 [2024-11-27 13:02:29.753092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.442 qpair failed and we were unable to recover it. 00:24:03.442 [2024-11-27 13:02:29.765171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.442 [2024-11-27 13:02:29.765215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.442 [2024-11-27 13:02:29.765233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.442 [2024-11-27 13:02:29.765242] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.442 [2024-11-27 13:02:29.765251] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.442 [2024-11-27 13:02:29.773433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.442 qpair failed and we were unable to recover it. 
00:24:03.442 [2024-11-27 13:02:29.783085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.442 [2024-11-27 13:02:29.783127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.442 [2024-11-27 13:02:29.783146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.442 [2024-11-27 13:02:29.783155] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.442 [2024-11-27 13:02:29.783164] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.442 [2024-11-27 13:02:29.793361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.442 qpair failed and we were unable to recover it. 00:24:03.442 [2024-11-27 13:02:29.803158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.442 [2024-11-27 13:02:29.803204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.442 [2024-11-27 13:02:29.803226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.442 [2024-11-27 13:02:29.803236] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.443 [2024-11-27 13:02:29.803245] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.443 [2024-11-27 13:02:29.813407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.443 qpair failed and we were unable to recover it. 00:24:03.443 [2024-11-27 13:02:29.823076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.443 [2024-11-27 13:02:29.823112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.443 [2024-11-27 13:02:29.823131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.443 [2024-11-27 13:02:29.823141] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.443 [2024-11-27 13:02:29.823149] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.702 [2024-11-27 13:02:29.833451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.702 qpair failed and we were unable to recover it. 
00:24:03.702 [2024-11-27 13:02:29.843289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.702 [2024-11-27 13:02:29.843332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.702 [2024-11-27 13:02:29.843350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.702 [2024-11-27 13:02:29.843359] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.702 [2024-11-27 13:02:29.843368] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.702 [2024-11-27 13:02:29.853531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.702 qpair failed and we were unable to recover it. 00:24:03.702 [2024-11-27 13:02:29.863201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.702 [2024-11-27 13:02:29.863248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.702 [2024-11-27 13:02:29.863266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.702 [2024-11-27 13:02:29.863276] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.702 [2024-11-27 13:02:29.863285] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.702 [2024-11-27 13:02:29.873544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.702 qpair failed and we were unable to recover it. 00:24:03.702 [2024-11-27 13:02:29.883274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.702 [2024-11-27 13:02:29.883316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.702 [2024-11-27 13:02:29.883334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.702 [2024-11-27 13:02:29.883347] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.702 [2024-11-27 13:02:29.883356] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.702 [2024-11-27 13:02:29.893616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.702 qpair failed and we were unable to recover it. 
00:24:03.702 [2024-11-27 13:02:29.903259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.702 [2024-11-27 13:02:29.903296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.702 [2024-11-27 13:02:29.903314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.702 [2024-11-27 13:02:29.903324] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.702 [2024-11-27 13:02:29.903333] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.702 [2024-11-27 13:02:29.913763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.702 qpair failed and we were unable to recover it. 00:24:03.702 [2024-11-27 13:02:29.923478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.702 [2024-11-27 13:02:29.923518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.702 [2024-11-27 13:02:29.923537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.702 [2024-11-27 13:02:29.923547] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.702 [2024-11-27 13:02:29.923556] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.702 [2024-11-27 13:02:29.933766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.702 qpair failed and we were unable to recover it. 00:24:03.702 [2024-11-27 13:02:29.943383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.702 [2024-11-27 13:02:29.943428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.702 [2024-11-27 13:02:29.943446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.702 [2024-11-27 13:02:29.943456] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.702 [2024-11-27 13:02:29.943464] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.702 [2024-11-27 13:02:29.953834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.702 qpair failed and we were unable to recover it. 
00:24:03.702 [2024-11-27 13:02:29.963529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.702 [2024-11-27 13:02:29.963572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.702 [2024-11-27 13:02:29.963589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.702 [2024-11-27 13:02:29.963599] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.702 [2024-11-27 13:02:29.963612] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.702 [2024-11-27 13:02:29.973904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.702 qpair failed and we were unable to recover it. 00:24:03.702 [2024-11-27 13:02:29.983641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.702 [2024-11-27 13:02:29.983682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.702 [2024-11-27 13:02:29.983700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.702 [2024-11-27 13:02:29.983710] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.702 [2024-11-27 13:02:29.983718] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.702 [2024-11-27 13:02:29.993886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.702 qpair failed and we were unable to recover it. 00:24:03.702 [2024-11-27 13:02:30.003704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.702 [2024-11-27 13:02:30.003763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.703 [2024-11-27 13:02:30.003781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.703 [2024-11-27 13:02:30.003791] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.703 [2024-11-27 13:02:30.003800] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.703 [2024-11-27 13:02:30.014322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.703 qpair failed and we were unable to recover it. 
00:24:03.703 [2024-11-27 13:02:30.024013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.703 [2024-11-27 13:02:30.024058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.703 [2024-11-27 13:02:30.024077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.703 [2024-11-27 13:02:30.024087] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.703 [2024-11-27 13:02:30.024097] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.703 [2024-11-27 13:02:30.033966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.703 qpair failed and we were unable to recover it. 00:24:03.703 [2024-11-27 13:02:30.043737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.703 [2024-11-27 13:02:30.043782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.703 [2024-11-27 13:02:30.043801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.703 [2024-11-27 13:02:30.043811] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.703 [2024-11-27 13:02:30.043820] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:03.703 [2024-11-27 13:02:30.054156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:03.703 qpair failed and we were unable to recover it. 00:24:03.703 [2024-11-27 13:02:30.063926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.703 [2024-11-27 13:02:30.063978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.703 [2024-11-27 13:02:30.064005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.703 [2024-11-27 13:02:30.064019] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.703 [2024-11-27 13:02:30.064031] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:03.703 [2024-11-27 13:02:30.074104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:03.703 qpair failed and we were unable to recover it. 
00:24:03.703 [2024-11-27 13:02:30.083710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.703 [2024-11-27 13:02:30.083753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.703 [2024-11-27 13:02:30.083772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.703 [2024-11-27 13:02:30.083782] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.703 [2024-11-27 13:02:30.083790] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:03.962 [2024-11-27 13:02:30.094057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:03.962 qpair failed and we were unable to recover it. 00:24:03.962 [2024-11-27 13:02:30.103827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.962 [2024-11-27 13:02:30.103869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.962 [2024-11-27 13:02:30.103888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.962 [2024-11-27 13:02:30.103897] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.962 [2024-11-27 13:02:30.103906] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:03.962 [2024-11-27 13:02:30.114202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:03.962 qpair failed and we were unable to recover it. 00:24:03.962 [2024-11-27 13:02:30.123923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.962 [2024-11-27 13:02:30.123962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.962 [2024-11-27 13:02:30.123981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.962 [2024-11-27 13:02:30.123990] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.962 [2024-11-27 13:02:30.123999] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:03.962 [2024-11-27 13:02:30.134195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:03.962 qpair failed and we were unable to recover it. 
00:24:03.962 [2024-11-27 13:02:30.143987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.962 [2024-11-27 13:02:30.144027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.962 [2024-11-27 13:02:30.144049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.962 [2024-11-27 13:02:30.144059] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.962 [2024-11-27 13:02:30.144068] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:03.962 [2024-11-27 13:02:30.154322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:03.962 qpair failed and we were unable to recover it. 00:24:03.962 [2024-11-27 13:02:30.164073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.962 [2024-11-27 13:02:30.164114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.962 [2024-11-27 13:02:30.164132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.962 [2024-11-27 13:02:30.164142] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.962 [2024-11-27 13:02:30.164150] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:03.962 [2024-11-27 13:02:30.174456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:03.962 qpair failed and we were unable to recover it. 00:24:03.962 [2024-11-27 13:02:30.184212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:03.962 [2024-11-27 13:02:30.184255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:03.962 [2024-11-27 13:02:30.184274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:03.962 [2024-11-27 13:02:30.184283] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:03.962 [2024-11-27 13:02:30.184292] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:03.962 [2024-11-27 13:02:30.194350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:03.962 qpair failed and we were unable to recover it. 
[... 13:02:30.204184 through 13:02:31.257454: the seven-record CONNECT-failure cycle above repeats 53 more times at roughly 20 ms intervals, unchanged apart from the timestamps, always against rqpair=0x2000003d4c40 / qpair id 1, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:04.995 [2024-11-27 13:02:31.267352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:04.995 [2024-11-27 13:02:31.267400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:04.995 [2024-11-27 13:02:31.267427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:04.995 [2024-11-27 13:02:31.267441] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:04.995 [2024-11-27 13:02:31.267453] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:04.995 [2024-11-27 13:02:31.277503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:04.995 qpair failed and we were unable to recover it.
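Every record in this section carries the same target tuple (trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1). A minimal sketch of how a host built on the public SPDK API would express that tuple and connect to the controller's admin queue; this is an illustration under those assumptions, not the test's own code:

#include <string.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative only: build the transport ID matching the tuple logged
 * above and connect to the controller's admin queue. */
static struct spdk_nvme_ctrlr *connect_target(void)
{
	struct spdk_nvme_transport_id trid;

	memset(&trid, 0, sizeof(trid));
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_RDMA);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "192.168.100.8");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

	/* Synchronous connect; returns NULL if the admin CONNECT fails. */
	return spdk_nvme_connect(&trid, NULL, 0);
}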
[... 13:02:31.287228 through 13:02:31.518370: the identical cycle repeats 12 more times against rqpair=0x2000003d3000 / qpair id 3, each attempt again ending with "qpair failed and we were unable to recover it." ...]
00:24:05.254 [2024-11-27 13:02:31.528128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.254 [2024-11-27 13:02:31.528168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.254 [2024-11-27 13:02:31.528187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.254 [2024-11-27 13:02:31.528196] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.254 [2024-11-27 13:02:31.528205] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.254 [2024-11-27 13:02:31.538177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.254 qpair failed and we were unable to recover it. 00:24:05.254 [2024-11-27 13:02:31.548097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.254 [2024-11-27 13:02:31.548145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.254 [2024-11-27 13:02:31.548164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.254 [2024-11-27 13:02:31.548174] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.254 [2024-11-27 13:02:31.548182] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.254 [2024-11-27 13:02:31.558319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.254 qpair failed and we were unable to recover it. 00:24:05.254 [2024-11-27 13:02:31.568156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.254 [2024-11-27 13:02:31.568196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.254 [2024-11-27 13:02:31.568215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.254 [2024-11-27 13:02:31.568225] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.254 [2024-11-27 13:02:31.568234] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.254 [2024-11-27 13:02:31.578392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.254 qpair failed and we were unable to recover it. 
00:24:05.254 [2024-11-27 13:02:31.588190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.254 [2024-11-27 13:02:31.588235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.254 [2024-11-27 13:02:31.588254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.254 [2024-11-27 13:02:31.588263] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.254 [2024-11-27 13:02:31.588272] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.254 [2024-11-27 13:02:31.598563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.254 qpair failed and we were unable to recover it. 00:24:05.254 [2024-11-27 13:02:31.608295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.254 [2024-11-27 13:02:31.608335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.254 [2024-11-27 13:02:31.608354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.254 [2024-11-27 13:02:31.608363] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.254 [2024-11-27 13:02:31.608372] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.254 [2024-11-27 13:02:31.618630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.254 qpair failed and we were unable to recover it. 00:24:05.254 [2024-11-27 13:02:31.628357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.254 [2024-11-27 13:02:31.628403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.254 [2024-11-27 13:02:31.628420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.254 [2024-11-27 13:02:31.628430] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.254 [2024-11-27 13:02:31.628439] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.512 [2024-11-27 13:02:31.638694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.512 qpair failed and we were unable to recover it. 
00:24:05.512 [2024-11-27 13:02:31.648306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.512 [2024-11-27 13:02:31.648344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.512 [2024-11-27 13:02:31.648362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.512 [2024-11-27 13:02:31.648371] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.512 [2024-11-27 13:02:31.648380] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.512 [2024-11-27 13:02:31.658653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.512 qpair failed and we were unable to recover it. 00:24:05.512 [2024-11-27 13:02:31.668355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.512 [2024-11-27 13:02:31.668393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.512 [2024-11-27 13:02:31.668411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.512 [2024-11-27 13:02:31.668421] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.512 [2024-11-27 13:02:31.668430] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.512 [2024-11-27 13:02:31.678742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.512 qpair failed and we were unable to recover it. 00:24:05.512 [2024-11-27 13:02:31.688532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.512 [2024-11-27 13:02:31.688574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.512 [2024-11-27 13:02:31.688603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.512 [2024-11-27 13:02:31.688625] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.512 [2024-11-27 13:02:31.688634] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.512 [2024-11-27 13:02:31.698727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.512 qpair failed and we were unable to recover it. 
00:24:05.512 [2024-11-27 13:02:31.708520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.512 [2024-11-27 13:02:31.708563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.512 [2024-11-27 13:02:31.708582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.512 [2024-11-27 13:02:31.708592] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.512 [2024-11-27 13:02:31.708601] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.512 [2024-11-27 13:02:31.718722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.512 qpair failed and we were unable to recover it. 00:24:05.512 [2024-11-27 13:02:31.728804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.512 [2024-11-27 13:02:31.728853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.512 [2024-11-27 13:02:31.728871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.512 [2024-11-27 13:02:31.728881] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.512 [2024-11-27 13:02:31.728889] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.512 [2024-11-27 13:02:31.738930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.512 qpair failed and we were unable to recover it. 00:24:05.512 [2024-11-27 13:02:31.748678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.513 [2024-11-27 13:02:31.748721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.513 [2024-11-27 13:02:31.748740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.513 [2024-11-27 13:02:31.748749] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.513 [2024-11-27 13:02:31.748758] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.513 [2024-11-27 13:02:31.758902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.513 qpair failed and we were unable to recover it. 
00:24:05.513 [2024-11-27 13:02:31.768819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.513 [2024-11-27 13:02:31.768864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.513 [2024-11-27 13:02:31.768887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.513 [2024-11-27 13:02:31.768897] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.513 [2024-11-27 13:02:31.768906] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.513 [2024-11-27 13:02:31.779163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.513 qpair failed and we were unable to recover it. 00:24:05.513 [2024-11-27 13:02:31.788911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.513 [2024-11-27 13:02:31.788958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.513 [2024-11-27 13:02:31.788976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.513 [2024-11-27 13:02:31.788986] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.513 [2024-11-27 13:02:31.788994] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.513 [2024-11-27 13:02:31.799283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.513 qpair failed and we were unable to recover it. 00:24:05.513 [2024-11-27 13:02:31.808835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.513 [2024-11-27 13:02:31.808886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.513 [2024-11-27 13:02:31.808905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.513 [2024-11-27 13:02:31.808914] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.513 [2024-11-27 13:02:31.808923] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.513 [2024-11-27 13:02:31.819209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.513 qpair failed and we were unable to recover it. 
00:24:05.513 [2024-11-27 13:02:31.828960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.513 [2024-11-27 13:02:31.828998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.513 [2024-11-27 13:02:31.829017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.513 [2024-11-27 13:02:31.829028] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.513 [2024-11-27 13:02:31.829037] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.513 [2024-11-27 13:02:31.839306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.513 qpair failed and we were unable to recover it. 00:24:05.513 [2024-11-27 13:02:31.848988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.513 [2024-11-27 13:02:31.849029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.513 [2024-11-27 13:02:31.849047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.513 [2024-11-27 13:02:31.849060] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.513 [2024-11-27 13:02:31.849069] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.513 [2024-11-27 13:02:31.859322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.513 qpair failed and we were unable to recover it. 00:24:05.513 [2024-11-27 13:02:31.868956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.513 [2024-11-27 13:02:31.868996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.513 [2024-11-27 13:02:31.869014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.513 [2024-11-27 13:02:31.869024] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.513 [2024-11-27 13:02:31.869033] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.513 [2024-11-27 13:02:31.879505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.513 qpair failed and we were unable to recover it. 
00:24:05.513 [2024-11-27 13:02:31.889058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.513 [2024-11-27 13:02:31.889098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.513 [2024-11-27 13:02:31.889116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.513 [2024-11-27 13:02:31.889126] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.513 [2024-11-27 13:02:31.889135] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:31.899515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 00:24:05.771 [2024-11-27 13:02:31.909214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.771 [2024-11-27 13:02:31.909250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.771 [2024-11-27 13:02:31.909269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.771 [2024-11-27 13:02:31.909279] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.771 [2024-11-27 13:02:31.909287] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:31.919405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 00:24:05.771 [2024-11-27 13:02:31.929190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.771 [2024-11-27 13:02:31.929231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.771 [2024-11-27 13:02:31.929249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.771 [2024-11-27 13:02:31.929258] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.771 [2024-11-27 13:02:31.929267] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:31.939546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 
00:24:05.771 [2024-11-27 13:02:31.949292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.771 [2024-11-27 13:02:31.949336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.771 [2024-11-27 13:02:31.949354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.771 [2024-11-27 13:02:31.949364] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.771 [2024-11-27 13:02:31.949372] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:31.959596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 00:24:05.771 [2024-11-27 13:02:31.969315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.771 [2024-11-27 13:02:31.969362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.771 [2024-11-27 13:02:31.969379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.771 [2024-11-27 13:02:31.969389] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.771 [2024-11-27 13:02:31.969398] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:31.979631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 00:24:05.771 [2024-11-27 13:02:31.989322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.771 [2024-11-27 13:02:31.989361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.771 [2024-11-27 13:02:31.989379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.771 [2024-11-27 13:02:31.989389] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.771 [2024-11-27 13:02:31.989398] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:31.999603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 
00:24:05.771 [2024-11-27 13:02:32.009422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.771 [2024-11-27 13:02:32.009465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.771 [2024-11-27 13:02:32.009483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.771 [2024-11-27 13:02:32.009493] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.771 [2024-11-27 13:02:32.009502] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:32.019867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 00:24:05.771 [2024-11-27 13:02:32.029430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.771 [2024-11-27 13:02:32.029476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.771 [2024-11-27 13:02:32.029495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.771 [2024-11-27 13:02:32.029504] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.771 [2024-11-27 13:02:32.029514] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:32.039711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 00:24:05.771 [2024-11-27 13:02:32.049526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.771 [2024-11-27 13:02:32.049573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.771 [2024-11-27 13:02:32.049592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.771 [2024-11-27 13:02:32.049601] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.771 [2024-11-27 13:02:32.049615] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:32.059928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 
00:24:05.771 [2024-11-27 13:02:32.069684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.771 [2024-11-27 13:02:32.069723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.771 [2024-11-27 13:02:32.069740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.771 [2024-11-27 13:02:32.069750] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.771 [2024-11-27 13:02:32.069759] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:32.079944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 00:24:05.771 [2024-11-27 13:02:32.089652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.771 [2024-11-27 13:02:32.089695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.771 [2024-11-27 13:02:32.089713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.771 [2024-11-27 13:02:32.089723] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.771 [2024-11-27 13:02:32.089731] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:32.099805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 00:24:05.771 [2024-11-27 13:02:32.109681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.771 [2024-11-27 13:02:32.109728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.771 [2024-11-27 13:02:32.109750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.771 [2024-11-27 13:02:32.109760] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.771 [2024-11-27 13:02:32.109768] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:32.120101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 
00:24:05.771 [2024-11-27 13:02:32.129782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.771 [2024-11-27 13:02:32.129821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.771 [2024-11-27 13:02:32.129839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.771 [2024-11-27 13:02:32.129850] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.771 [2024-11-27 13:02:32.129858] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:05.771 [2024-11-27 13:02:32.140157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.771 qpair failed and we were unable to recover it. 00:24:05.771 [2024-11-27 13:02:32.149777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.772 [2024-11-27 13:02:32.149822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.772 [2024-11-27 13:02:32.149840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.772 [2024-11-27 13:02:32.149849] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.772 [2024-11-27 13:02:32.149858] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.029 [2024-11-27 13:02:32.160078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.029 qpair failed and we were unable to recover it. 00:24:06.029 [2024-11-27 13:02:32.169826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.029 [2024-11-27 13:02:32.169870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.029 [2024-11-27 13:02:32.169888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.029 [2024-11-27 13:02:32.169898] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.029 [2024-11-27 13:02:32.169907] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.029 [2024-11-27 13:02:32.180204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.029 qpair failed and we were unable to recover it. 
00:24:06.029 [2024-11-27 13:02:32.189995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.029 [2024-11-27 13:02:32.190035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.029 [2024-11-27 13:02:32.190053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.029 [2024-11-27 13:02:32.190063] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.029 [2024-11-27 13:02:32.190075] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.029 [2024-11-27 13:02:32.200504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.029 qpair failed and we were unable to recover it. 00:24:06.029 [2024-11-27 13:02:32.210060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.029 [2024-11-27 13:02:32.210104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.029 [2024-11-27 13:02:32.210122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.029 [2024-11-27 13:02:32.210131] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.029 [2024-11-27 13:02:32.210140] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.029 [2024-11-27 13:02:32.220389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.029 qpair failed and we were unable to recover it. 00:24:06.029 [2024-11-27 13:02:32.230112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.029 [2024-11-27 13:02:32.230160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.029 [2024-11-27 13:02:32.230179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.029 [2024-11-27 13:02:32.230189] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.029 [2024-11-27 13:02:32.230198] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.029 [2024-11-27 13:02:32.240511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.029 qpair failed and we were unable to recover it. 
00:24:06.029 [2024-11-27 13:02:32.250231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.029 [2024-11-27 13:02:32.250272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.029 [2024-11-27 13:02:32.250290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.029 [2024-11-27 13:02:32.250300] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.029 [2024-11-27 13:02:32.250309] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.029 [2024-11-27 13:02:32.260540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.029 qpair failed and we were unable to recover it. 00:24:06.029 [2024-11-27 13:02:32.270219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.029 [2024-11-27 13:02:32.270261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.029 [2024-11-27 13:02:32.270280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.029 [2024-11-27 13:02:32.270290] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.029 [2024-11-27 13:02:32.270299] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.029 [2024-11-27 13:02:32.280441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.029 qpair failed and we were unable to recover it. 00:24:06.029 [2024-11-27 13:02:32.290321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.029 [2024-11-27 13:02:32.290363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.029 [2024-11-27 13:02:32.290382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.029 [2024-11-27 13:02:32.290392] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.029 [2024-11-27 13:02:32.290401] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.029 [2024-11-27 13:02:32.300644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.029 qpair failed and we were unable to recover it. 
00:24:06.029 [2024-11-27 13:02:32.310279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.029 [2024-11-27 13:02:32.310319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.029 [2024-11-27 13:02:32.310338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.029 [2024-11-27 13:02:32.310348] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.029 [2024-11-27 13:02:32.310357] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.029 [2024-11-27 13:02:32.320535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.029 qpair failed and we were unable to recover it. 00:24:06.029 [2024-11-27 13:02:32.330392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.029 [2024-11-27 13:02:32.330433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.029 [2024-11-27 13:02:32.330451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.029 [2024-11-27 13:02:32.330461] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.029 [2024-11-27 13:02:32.330469] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.029 [2024-11-27 13:02:32.340591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.029 qpair failed and we were unable to recover it. 00:24:06.029 [2024-11-27 13:02:32.350385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.029 [2024-11-27 13:02:32.350430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.029 [2024-11-27 13:02:32.350448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.029 [2024-11-27 13:02:32.350457] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.029 [2024-11-27 13:02:32.350466] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.029 [2024-11-27 13:02:32.360838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.029 qpair failed and we were unable to recover it. 
00:24:06.029 [2024-11-27 13:02:32.370464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.029 [2024-11-27 13:02:32.370509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.029 [2024-11-27 13:02:32.370527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.029 [2024-11-27 13:02:32.370536] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.029 [2024-11-27 13:02:32.370545] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.029 [2024-11-27 13:02:32.380697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.029 qpair failed and we were unable to recover it. 00:24:06.029 [2024-11-27 13:02:32.390415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.030 [2024-11-27 13:02:32.390457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.030 [2024-11-27 13:02:32.390477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.030 [2024-11-27 13:02:32.390487] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.030 [2024-11-27 13:02:32.390497] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.030 [2024-11-27 13:02:32.400783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.030 qpair failed and we were unable to recover it. 00:24:06.030 [2024-11-27 13:02:32.410552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.030 [2024-11-27 13:02:32.410596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.030 [2024-11-27 13:02:32.410627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.030 [2024-11-27 13:02:32.410638] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.030 [2024-11-27 13:02:32.410647] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.287 [2024-11-27 13:02:32.420734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.287 qpair failed and we were unable to recover it. 
00:24:06.287 [2024-11-27 13:02:32.430662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.287 [2024-11-27 13:02:32.430705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.287 [2024-11-27 13:02:32.430723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.287 [2024-11-27 13:02:32.430732] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.287 [2024-11-27 13:02:32.430741] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.287 [2024-11-27 13:02:32.441149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.287 qpair failed and we were unable to recover it. 00:24:06.287 [2024-11-27 13:02:32.450735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.287 [2024-11-27 13:02:32.450772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.287 [2024-11-27 13:02:32.450794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.287 [2024-11-27 13:02:32.450804] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.287 [2024-11-27 13:02:32.450812] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.287 [2024-11-27 13:02:32.460808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.287 qpair failed and we were unable to recover it. 00:24:06.287 [2024-11-27 13:02:32.470689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.287 [2024-11-27 13:02:32.470728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.287 [2024-11-27 13:02:32.470746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.287 [2024-11-27 13:02:32.470756] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.287 [2024-11-27 13:02:32.470765] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.287 [2024-11-27 13:02:32.481034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.287 qpair failed and we were unable to recover it. 
00:24:06.287 [2024-11-27 13:02:32.490862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.287 [2024-11-27 13:02:32.490904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.287 [2024-11-27 13:02:32.490922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.287 [2024-11-27 13:02:32.490931] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.287 [2024-11-27 13:02:32.490940] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.287 [2024-11-27 13:02:32.501224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.287 qpair failed and we were unable to recover it. 00:24:06.287 [2024-11-27 13:02:32.510849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.287 [2024-11-27 13:02:32.510894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.287 [2024-11-27 13:02:32.510912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.287 [2024-11-27 13:02:32.510922] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.287 [2024-11-27 13:02:32.510931] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.287 [2024-11-27 13:02:32.521133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.287 qpair failed and we were unable to recover it. 00:24:06.287 [2024-11-27 13:02:32.530857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.287 [2024-11-27 13:02:32.530895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.287 [2024-11-27 13:02:32.530913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.287 [2024-11-27 13:02:32.530923] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.287 [2024-11-27 13:02:32.530935] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.287 [2024-11-27 13:02:32.541289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.287 qpair failed and we were unable to recover it. 
00:24:06.287 [2024-11-27 13:02:32.550756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.287 [2024-11-27 13:02:32.550797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.287 [2024-11-27 13:02:32.550815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.287 [2024-11-27 13:02:32.550825] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.287 [2024-11-27 13:02:32.550834] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.287 [2024-11-27 13:02:32.561304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.287 qpair failed and we were unable to recover it. 00:24:06.287 [2024-11-27 13:02:32.571081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.287 [2024-11-27 13:02:32.571124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.287 [2024-11-27 13:02:32.571142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.287 [2024-11-27 13:02:32.571152] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.287 [2024-11-27 13:02:32.571160] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:06.287 [2024-11-27 13:02:32.581285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.287 qpair failed and we were unable to recover it. 
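Note: the retry storm above has a single signature. On the target side, _nvmf_ctrlr_add_io_qpair rejects each I/O-queue CONNECT with "Unknown controller ID 0x1" because the controller that qpair refers to no longer exists after the disconnect; on the host side the same CONNECT completes with sct 1, sc 130, which per the NVMe-oF specification decodes to status code type 0x1 (command specific) and status 0x82, Connect Invalid Parameters. When triaging a capture like this one, a short pipeline can tally how many retries hit each signature; a minimal sketch, assuming the console output was saved to build.log (a hypothetical filename, not produced by the test):

    # Hypothetical triage helper, not part of the SPDK test suite:
    # count the distinct CONNECT failure signatures in a saved console log.
    grep -o 'Connect command completed with error: sct [0-9]\+, sc [0-9]\+' build.log \
      | sort | uniq -c | sort -rn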
00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Write completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 Read completed with error (sct=0, sc=8) 00:24:07.220 starting I/O failed 00:24:07.220 [2024-11-27 13:02:33.586370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:07.220 [2024-11-27 13:02:33.593767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.220 [2024-11-27 13:02:33.593818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.220 [2024-11-27 13:02:33.593837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.220 [2024-11-27 13:02:33.593847] 
nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.220 [2024-11-27 13:02:33.593856] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cb980 00:24:07.478 [2024-11-27 13:02:33.604305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:07.478 qpair failed and we were unable to recover it. 00:24:07.478 [2024-11-27 13:02:33.614171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.478 [2024-11-27 13:02:33.614211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.478 [2024-11-27 13:02:33.614229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.479 [2024-11-27 13:02:33.614239] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.479 [2024-11-27 13:02:33.614248] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cb980 00:24:07.479 [2024-11-27 13:02:33.624461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:07.479 qpair failed and we were unable to recover it. 00:24:07.479 [2024-11-27 13:02:33.624595] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:24:07.479 A controller has encountered a failure and is being reset. 00:24:07.479 [2024-11-27 13:02:33.634151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.479 [2024-11-27 13:02:33.634192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.479 [2024-11-27 13:02:33.634219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.479 [2024-11-27 13:02:33.634233] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.479 [2024-11-27 13:02:33.634246] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:07.479 [2024-11-27 13:02:33.644513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:07.479 qpair failed and we were unable to recover it. 
00:24:07.479 [2024-11-27 13:02:33.654257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.479 [2024-11-27 13:02:33.654300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.479 [2024-11-27 13:02:33.654321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.479 [2024-11-27 13:02:33.654332] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.479 [2024-11-27 13:02:33.654340] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:07.479 [2024-11-27 13:02:33.664637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:07.479 qpair failed and we were unable to recover it. 00:24:07.479 [2024-11-27 13:02:33.674239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.479 [2024-11-27 13:02:33.674282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.479 [2024-11-27 13:02:33.674301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.479 [2024-11-27 13:02:33.674310] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.479 [2024-11-27 13:02:33.674319] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:07.479 [2024-11-27 13:02:33.684446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:07.479 qpair failed and we were unable to recover it. 00:24:07.479 [2024-11-27 13:02:33.684626] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:07.479 [2024-11-27 13:02:33.717784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:24:07.479 Controller properly reset. 00:24:07.479 Initializing NVMe Controllers 00:24:07.479 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:07.479 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:07.479 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:07.479 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:07.479 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:07.479 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:07.479 Initialization complete. Launching workers. 
00:24:07.479 Starting thread on core 1 00:24:07.479 Starting thread on core 2 00:24:07.479 Starting thread on core 3 00:24:07.479 Starting thread on core 0 00:24:07.479 13:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:07.479 00:24:07.479 real 0m12.644s 00:24:07.479 user 0m27.458s 00:24:07.479 sys 0m3.118s 00:24:07.479 13:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:07.479 13:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:07.479 ************************************ 00:24:07.479 END TEST nvmf_target_disconnect_tc2 00:24:07.479 ************************************ 00:24:07.479 13:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:24:07.479 13:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:24:07.479 13:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:07.479 13:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:07.479 13:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:07.738 ************************************ 00:24:07.738 START TEST nvmf_target_disconnect_tc3 00:24:07.738 ************************************ 00:24:07.738 13:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:24:07.738 13:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=97939 00:24:07.738 13:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:24:07.738 13:02:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:24:09.638 13:02:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 96750 00:24:09.638 13:02:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Read completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Read completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Read completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Read 
completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Read completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Read completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Read completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Read completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Read completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Read completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Read completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Read completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 Write completed with error (sct=0, sc=8) 00:24:11.015 starting I/O failed 00:24:11.015 [2024-11-27 13:02:37.081501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.582 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 96750 Killed "${NVMF_APP[@]}" "$@" 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=98660 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 98660 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@835 -- # '[' -z 98660 ']' 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.582 13:02:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:11.582 [2024-11-27 13:02:37.939550] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:24:11.582 [2024-11-27 13:02:37.939604] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.840 [2024-11-27 13:02:38.045334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.840 [2024-11-27 13:02:38.083640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.840 [2024-11-27 13:02:38.083682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.840 [2024-11-27 13:02:38.083692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.840 [2024-11-27 13:02:38.083700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.840 [2024-11-27 13:02:38.083707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
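Note: the I/O bursts in this part of the log come from SPDK's reconnect example, launched above for tc3 (and visible earlier as the "Starting thread on core 0..3" lines, which match its -c 0xF mask). The invocation below is quoted from the trace; the per-flag annotations are an interpretation consistent with SPDK's perf-style option letters, not text from the log itself:

    # tc3 reconnect invocation as logged; flag meanings are an interpretation:
    #   -q 32      queue depth per qpair
    #   -o 4096    I/O size in bytes
    #   -w randrw  mixed random read/write workload
    #   -M 50      50% of I/Os are reads
    #   -t 10      run time in seconds
    #   -c 0xF     core mask, i.e. worker threads on cores 0-3
    #   -r '...'   transport ID; alt_traddr is the failover target address
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 \
      -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'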
00:24:11.840 Write completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Write completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Write completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Write completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Write completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Write completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Write completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Write completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Write completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Write completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Write completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Write completed with error (sct=0, sc=8) 00:24:11.840 starting I/O failed 00:24:11.840 Read completed with error (sct=0, sc=8) 00:24:11.841 starting I/O failed 00:24:11.841 Write completed with error (sct=0, sc=8) 00:24:11.841 starting I/O failed 00:24:11.841 [2024-11-27 13:02:38.085405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:11.841 [2024-11-27 13:02:38.085516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:11.841 [2024-11-27 13:02:38.085642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:11.841 [2024-11-27 13:02:38.085643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:11.841 [2024-11-27 13:02:38.086625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:24:12.408 13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.408 
13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:12.408 13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:12.408 13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:12.408 13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.666 13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.666 13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:12.666 13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.666 13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.666 Malloc0 00:24:12.666 13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.666 13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:12.666 13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.666 13:02:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.666 [2024-11-27 13:02:38.892854] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xedfc30/0xeebc00) succeed. 00:24:12.666 [2024-11-27 13:02:38.902416] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xee12c0/0xf2d2a0) succeed. 
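Note: nvmfappstart launched this nvmf_tgt with -m 0xF0, and the reactor notices above show threads on cores 4-7 — exactly the set bits of that mask. Expanding a mask bit by bit is an easy sanity check; a minimal sketch in shell:

    # Expand an SPDK/DPDK core mask into the cores it selects.
    # 0xF0 is the nvmf_tgt mask used above; this prints cores 4..7.
    mask=0xF0
    for ((i = 0; i < 64; i++)); do
      (( (mask >> i) & 1 )) && echo "core $i"
    done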
00:24:12.666 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.666 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:12.666 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.666 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.666 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.666 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:12.666 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.666 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.666 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.666 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:24:12.666 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.666 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.666 [2024-11-27 13:02:39.046710] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:24:12.925 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.925 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:24:12.925 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.925 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.925 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.925 13:02:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 97939 00:24:12.925 Read completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Read completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 
starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Read completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Read completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Read completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Read completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Read completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Read completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Read completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Read completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Read completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Read completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 Write completed with error (sct=0, sc=8) 00:24:12.925 starting I/O failed 00:24:12.925 [2024-11-27 13:02:39.091672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.925 [2024-11-27 13:02:39.093358] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:12.925 [2024-11-27 13:02:39.093381] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:12.925 [2024-11-27 13:02:39.093389] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:13.859 [2024-11-27 13:02:40.097183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.859 qpair failed and we were unable to recover it. 
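Note: the rpc_cmd calls above are the complete tc3 target bring-up: a 64 MB malloc bdev with 512-byte blocks, an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 with the bdev as a namespace, and data plus discovery listeners on the failover address 192.168.100.9. Since rpc_cmd is the autotest wrapper around scripts/rpc.py, the same sequence can be replayed by hand against a running nvmf_tgt; a sketch, assuming this job's workspace path and the default /var/tmp/spdk.sock RPC socket:

    # Manual replay of the tc3 target bring-up (paths assumed from this job).
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512 B blocks
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420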
00:24:13.859 [2024-11-27 13:02:40.098796] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:13.859 [2024-11-27 13:02:40.098820] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:13.859 [2024-11-27 13:02:40.098831] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:14.793 [2024-11-27 13:02:41.102714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:14.793 qpair failed and we were unable to recover it. 00:24:14.793 [2024-11-27 13:02:41.104183] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:14.793 [2024-11-27 13:02:41.104202] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:14.793 [2024-11-27 13:02:41.104210] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:15.728 [2024-11-27 13:02:42.108102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:15.728 qpair failed and we were unable to recover it. 00:24:15.728 [2024-11-27 13:02:42.109532] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:15.728 [2024-11-27 13:02:42.109551] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:15.728 [2024-11-27 13:02:42.109559] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:17.102 [2024-11-27 13:02:43.113474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:17.102 qpair failed and we were unable to recover it. 00:24:17.102 [2024-11-27 13:02:43.114966] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:17.102 [2024-11-27 13:02:43.114984] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:17.102 [2024-11-27 13:02:43.114992] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:18.037 [2024-11-27 13:02:44.118915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:18.037 qpair failed and we were unable to recover it. 
00:24:18.037 [2024-11-27 13:02:44.120371] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:18.037 [2024-11-27 13:02:44.120390] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:18.037 [2024-11-27 13:02:44.120398] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:18.971 [2024-11-27 13:02:45.124255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:18.971 qpair failed and we were unable to recover it. 00:24:18.971 [2024-11-27 13:02:45.125742] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:18.971 [2024-11-27 13:02:45.125762] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:18.971 [2024-11-27 13:02:45.125770] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:19.906 [2024-11-27 13:02:46.129523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:19.906 qpair failed and we were unable to recover it. 00:24:19.906 [2024-11-27 13:02:46.131131] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:19.906 [2024-11-27 13:02:46.131154] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:19.906 [2024-11-27 13:02:46.131163] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:20.840 [2024-11-27 13:02:47.135092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:24:20.840 qpair failed and we were unable to recover it. 00:24:20.840 [2024-11-27 13:02:47.136562] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:20.840 [2024-11-27 13:02:47.136581] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:20.840 [2024-11-27 13:02:47.136590] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:21.774 [2024-11-27 13:02:48.140295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:24:21.775 qpair failed and we were unable to recover it. 00:24:21.775 [2024-11-27 13:02:48.140406] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:24:21.775 A controller has encountered a failure and is being reset. 
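Note the cadence of the failures above: every attempt dies on RDMA_CM_EVENT_REJECTED (status 8), surfaces as "RDMA connect error -74" (-EBADMSG, the code SPDK's CM-event validation returns when it expected RDMA_CM_EVENT_ESTABLISHED), and the next attempt starts almost exactly one second later. The one-second spacing is an inference from the timestamps, not something the log states; it is easy to confirm from a saved capture (build.log is again a hypothetical filename):

    # Hypothetical check against a saved console log: print the wall-clock
    # time of each rejected connect attempt to verify the ~1 s retry spacing.
    grep 'RDMA_CM_EVENT_REJECTED' build.log | grep -oE '\[2024-11-27 [0-9:.]+\]'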
00:24:21.775 Resorting to new failover address 192.168.100.9 00:24:21.775 [2024-11-27 13:02:48.141952] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:21.775 [2024-11-27 13:02:48.141980] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:21.775 [2024-11-27 13:02:48.141993] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:23.148 [2024-11-27 13:02:49.145896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:23.148 qpair failed and we were unable to recover it. 00:24:23.148 [2024-11-27 13:02:49.147350] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:23.148 [2024-11-27 13:02:49.147369] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:23.148 [2024-11-27 13:02:49.147377] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:24.109 [2024-11-27 13:02:50.151102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:24.109 qpair failed and we were unable to recover it. 00:24:24.109 [2024-11-27 13:02:50.151210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:24.109 [2024-11-27 13:02:50.151319] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:24.109 [2024-11-27 13:02:50.153133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:24:24.109 Controller properly reset. 
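Note: this is the behavior tc3 exists to exercise. Once retries against the primary address are exhausted, the host logs "Resorting to new failover address 192.168.100.9" and completes the controller reset against the listener added earlier on the .9 address, so "Controller properly reset." above means the failover path worked. When reproducing this manually, it can help to confirm raw RDMA reachability of the failover address first; one option, assuming rdma-core's rping is installed and an rping server is running on the target (neither is part of this test):

    # Optional RDMA-level reachability check of the failover address.
    # Target side (assumed):  rping -s -a 192.168.100.9 -v
    # Host side:
    rping -c -a 192.168.100.9 -C 3 -v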
00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Write completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 Read completed with error (sct=0, sc=8) 00:24:24.823 starting I/O failed 00:24:24.823 [2024-11-27 13:02:51.200942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:25.106 Initializing NVMe Controllers 00:24:25.106 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.106 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.106 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:25.106 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:25.106 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:25.106 Associating RDMA 
(addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:25.106 Initialization complete. Launching workers. 00:24:25.106 Starting thread on core 1 00:24:25.106 Starting thread on core 2 00:24:25.106 Starting thread on core 3 00:24:25.106 Starting thread on core 0 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:24:25.106 00:24:25.106 real 0m17.378s 00:24:25.106 user 1m0.147s 00:24:25.106 sys 0m5.324s 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:25.106 ************************************ 00:24:25.106 END TEST nvmf_target_disconnect_tc3 00:24:25.106 ************************************ 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:25.106 rmmod nvme_rdma 00:24:25.106 rmmod nvme_fabrics 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.106 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 98660 ']' 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 98660 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 98660 ']' 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 98660 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98660 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98660' 00:24:25.107 
killing process with pid 98660 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 98660 00:24:25.107 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 98660 00:24:25.365 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:25.365 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:25.365 00:24:25.365 real 0m40.482s 00:24:25.365 user 2m24.789s 00:24:25.365 sys 0m15.806s 00:24:25.365 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.365 13:02:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:25.365 ************************************ 00:24:25.365 END TEST nvmf_target_disconnect 00:24:25.365 ************************************ 00:24:25.365 13:02:51 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:25.365 00:24:25.365 real 5m51.659s 00:24:25.365 user 13m11.448s 00:24:25.366 sys 1m57.663s 00:24:25.366 13:02:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.366 13:02:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.366 ************************************ 00:24:25.366 END TEST nvmf_host 00:24:25.366 ************************************ 00:24:25.625 13:02:51 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:24:25.625 00:24:25.625 real 18m47.162s 00:24:25.625 user 43m12.632s 00:24:25.625 sys 6m28.358s 00:24:25.625 13:02:51 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.625 13:02:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:25.625 ************************************ 00:24:25.625 END TEST nvmf_rdma 00:24:25.625 ************************************ 00:24:25.625 13:02:51 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:25.625 13:02:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:25.625 13:02:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:25.625 13:02:51 -- common/autotest_common.sh@10 -- # set +x 00:24:25.625 ************************************ 00:24:25.625 START TEST spdkcli_nvmf_rdma 00:24:25.625 ************************************ 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:25.625 * Looking for test storage... 
00:24:25.625 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:25.625 13:02:51 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:24:25.625 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:24:25.625 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.625 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:24:25.625 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:25.625 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:24:25.625 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:24:25.625 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:25.625 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:24:25.625 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:25.625 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:25.625 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:25.625 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:25.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.885 --rc genhtml_branch_coverage=1 00:24:25.885 --rc genhtml_function_coverage=1 00:24:25.885 --rc genhtml_legend=1 00:24:25.885 --rc geninfo_all_blocks=1 00:24:25.885 --rc geninfo_unexecuted_blocks=1 00:24:25.885 00:24:25.885 ' 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:25.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:25.885 --rc genhtml_branch_coverage=1 00:24:25.885 --rc genhtml_function_coverage=1 00:24:25.885 --rc genhtml_legend=1 00:24:25.885 --rc geninfo_all_blocks=1 00:24:25.885 --rc geninfo_unexecuted_blocks=1 00:24:25.885 00:24:25.885 ' 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:25.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.885 --rc genhtml_branch_coverage=1 00:24:25.885 --rc genhtml_function_coverage=1 00:24:25.885 --rc genhtml_legend=1 00:24:25.885 --rc geninfo_all_blocks=1 00:24:25.885 --rc geninfo_unexecuted_blocks=1 00:24:25.885 00:24:25.885 ' 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:25.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.885 --rc genhtml_branch_coverage=1 00:24:25.885 --rc genhtml_function_coverage=1 00:24:25.885 --rc genhtml_legend=1 00:24:25.885 --rc geninfo_all_blocks=1 00:24:25.885 --rc geninfo_unexecuted_blocks=1 00:24:25.885 00:24:25.885 ' 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:25.885 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:25.885 13:02:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=101180 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 101180 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 101180 ']' 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.886 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:25.886 [2024-11-27 13:02:52.095661] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:24:25.886 [2024-11-27 13:02:52.095716] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101180 ] 00:24:25.886 [2024-11-27 13:02:52.184817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:25.886 [2024-11-27 13:02:52.226977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.886 [2024-11-27 13:02:52.226980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.821 
13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.821 13:02:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:36.795 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:36.796 13:03:01 
spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:36.796 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:36.796 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:36.796 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:36.796 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
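[editor's note] The trace above walks the detected Mellanox PCI functions (vendor 0x15b3, device 0x1015) and resolves each one to its kernel net device through sysfs, printing the "Found net devices under ..." lines. As a point of reference only, not part of the test run, the same resolution can be sketched in a few lines of standalone bash; the sysfs layout is standard, but the vendor/device IDs are taken from this log rather than from nvmf/common.sh itself:

    # Sketch: enumerate PCI functions, keep Mellanox (0x15b3) NICs, and map
    # each one to the net device name the kernel exposes under .../net/.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")     # e.g. 0x15b3 (Mellanox)
        device=$(cat "$pci/device")     # e.g. 0x1015 (ConnectX-4 Lx)
        [[ $vendor == 0x15b3 ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
        done
    done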
00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:36.796 13:03:01 
spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:36.796 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:36.796 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:36.796 altname enp217s0f0np0 00:24:36.796 altname ens818f0np0 00:24:36.796 inet 192.168.100.8/24 scope global mlx_0_0 00:24:36.796 valid_lft forever preferred_lft forever 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:36.796 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:36.797 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:36.797 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:36.797 altname enp217s0f1np1 00:24:36.797 altname ens818f1np1 00:24:36.797 inet 192.168.100.9/24 scope global mlx_0_1 00:24:36.797 valid_lft forever preferred_lft forever 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:36.797 192.168.100.9' 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:36.797 192.168.100.9' 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:36.797 192.168.100.9' 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:36.797 13:03:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:36.797 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:36.797 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:36.797 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:36.797 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:36.797 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:24:36.797 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:36.797 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:36.797 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:36.797 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:36.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:36.797 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:36.797 ' 00:24:38.174 [2024-11-27 13:03:04.307934] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1645a60/0x16536d0) succeed. 00:24:38.174 [2024-11-27 13:03:04.317640] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1647140/0x16d3740) succeed. 
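[editor's note] The block above feeds spdkcli_job.py a scripted list of spdkcli commands that builds the entire NVMe-oF configuration under test: six malloc bdevs, an RDMA transport, three subsystems with namespaces, listen addresses, and allowed hosts. For orientation, here is a hypothetical equivalent of the first few steps expressed as direct SPDK JSON-RPC calls, assuming the stock scripts/rpc.py helper; the test drives these through spdkcli instead, so these exact invocations do not appear in the run:

    # create a 32 MiB malloc bdev with 512-byte blocks
    ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc1
    # create the RDMA transport with an 8 KiB I/O unit size
    ./scripts/rpc.py nvmf_create_transport -t RDMA -u 8192
    # subsystem with serial, namespace cap, and allow-any-host
    ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 \
        -s N37SXV509SRW -m 4 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4260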
00:24:39.550 [2024-11-27 13:03:05.720258] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:24:42.085 [2024-11-27 13:03:08.204027] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:24:44.621 [2024-11-27 13:03:10.387139] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:24:45.997 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:45.997 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:45.997 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:45.997 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:45.997 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:45.997 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:45.997 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:45.997 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:45.997 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:45.997 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:24:45.997 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:45.997 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:45.997 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:45.997 13:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:45.997 13:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:45.997 13:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:45.997 13:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:45.997 13:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:45.997 13:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:45.997 13:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:24:45.997 13:03:12 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:46.256 13:03:12 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:46.256 13:03:12 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:46.256 13:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:46.256 13:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:46.256 13:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:46.514 13:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:46.514 13:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:46.514 13:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:46.514 13:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:46.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:46.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:46.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:46.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:24:46.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:24:46.514 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:46.514 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:46.514 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:46.514 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:46.514 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:46.514 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:46.514 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:46.514 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:46.514 ' 00:24:51.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:51.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:51.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:51.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:51.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:24:51.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:24:51.782 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:51.782 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:51.782 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:51.782 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:51.782 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:24:51.782 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:51.782 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:51.782 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 101180 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 101180 ']' 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 101180 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101180 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101180' 00:24:51.782 killing process with pid 101180 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 101180 00:24:51.782 13:03:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 101180 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:51.782 
13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:51.782 rmmod nvme_rdma 00:24:51.782 rmmod nvme_fabrics 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:51.782 00:24:51.782 real 0m26.330s 00:24:51.782 user 0m57.687s 00:24:51.782 sys 0m7.517s 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.782 13:03:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:51.782 ************************************ 00:24:51.782 END TEST spdkcli_nvmf_rdma 00:24:51.782 ************************************ 00:24:52.041 13:03:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:52.041 13:03:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:52.041 13:03:18 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:52.041 13:03:18 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:52.041 13:03:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:52.041 13:03:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:52.041 13:03:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:52.041 13:03:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:52.041 13:03:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:52.041 13:03:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:52.041 13:03:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:52.041 13:03:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:52.041 13:03:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:52.041 13:03:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:52.041 13:03:18 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:52.041 13:03:18 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:24:52.041 13:03:18 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:52.041 13:03:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:52.041 13:03:18 -- common/autotest_common.sh@10 -- # set +x 00:24:52.041 13:03:18 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:52.041 13:03:18 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:52.041 13:03:18 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:52.041 13:03:18 -- common/autotest_common.sh@10 -- # set +x 00:24:58.608 INFO: APP EXITING 00:24:58.608 INFO: killing all VMs 00:24:58.608 INFO: killing vhost app 00:24:58.608 INFO: EXIT DONE 00:25:01.900 Waiting for block devices as requested 00:25:02.160 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:02.160 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:02.160 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:02.419 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:02.419 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:02.419 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:02.419 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:02.679 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:02.679 
0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:02.679 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:02.939 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:02.939 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:02.939 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:03.198 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:03.198 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:03.198 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:03.457 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:25:07.653 Cleaning 00:25:07.654 Removing: /var/run/dpdk/spdk0/config 00:25:07.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:07.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:07.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:07.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:07.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:25:07.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:25:07.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:25:07.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:25:07.654 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:07.654 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:07.654 Removing: /var/run/dpdk/spdk1/config 00:25:07.654 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:07.654 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:07.654 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:07.654 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:07.654 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:25:07.654 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:25:07.654 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:25:07.654 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:25:07.654 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:07.654 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:07.654 Removing: /var/run/dpdk/spdk1/mp_socket 00:25:07.654 Removing: /var/run/dpdk/spdk2/config 00:25:07.654 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:07.654 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:07.654 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:07.654 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:07.654 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:25:07.654 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:25:07.654 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:25:07.654 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:25:07.654 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:07.654 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:07.654 Removing: /var/run/dpdk/spdk3/config 00:25:07.654 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:07.654 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:07.654 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:07.654 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:07.654 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:25:07.654 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:25:07.654 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:25:07.654 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:25:07.654 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:07.654 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:07.654 Removing: /var/run/dpdk/spdk4/config 00:25:07.654 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 
00:25:07.654 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:07.654 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:07.654 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:07.654 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:25:07.654 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:25:07.654 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:25:07.654 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:25:07.654 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:07.654 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:07.654 Removing: /dev/shm/bdevperf_trace.pid4010063 00:25:07.654 Removing: /dev/shm/bdev_svc_trace.1 00:25:07.654 Removing: /dev/shm/nvmf_trace.0 00:25:07.654 Removing: /dev/shm/spdk_tgt_trace.pid3960761 00:25:07.654 Removing: /var/run/dpdk/spdk0 00:25:07.654 Removing: /var/run/dpdk/spdk1 00:25:07.654 Removing: /var/run/dpdk/spdk2 00:25:07.654 Removing: /var/run/dpdk/spdk3 00:25:07.654 Removing: /var/run/dpdk/spdk4 00:25:07.654 Removing: /var/run/dpdk/spdk_pid101180 00:25:07.654 Removing: /var/run/dpdk/spdk_pid17423 00:25:07.654 Removing: /var/run/dpdk/spdk_pid18235 00:25:07.654 Removing: /var/run/dpdk/spdk_pid19283 00:25:07.654 Removing: /var/run/dpdk/spdk_pid20098 00:25:07.654 Removing: /var/run/dpdk/spdk_pid20616 00:25:07.654 Removing: /var/run/dpdk/spdk_pid25880 00:25:07.654 Removing: /var/run/dpdk/spdk_pid25885 00:25:07.654 Removing: /var/run/dpdk/spdk_pid31172 00:25:07.654 Removing: /var/run/dpdk/spdk_pid31712 00:25:07.654 Removing: /var/run/dpdk/spdk_pid32274 00:25:07.654 Removing: /var/run/dpdk/spdk_pid33043 00:25:07.654 Removing: /var/run/dpdk/spdk_pid33126 00:25:07.654 Removing: /var/run/dpdk/spdk_pid38849 00:25:07.654 Removing: /var/run/dpdk/spdk_pid39407 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3958004 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3959273 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3960761 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3961473 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3962321 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3962597 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3963711 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3963721 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3964113 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3969962 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3971421 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3971745 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3972081 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3972424 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3972763 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3973046 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3973327 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3973653 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3974458 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3977676 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3977975 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3978276 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3978541 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3979107 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3979129 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3979699 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3979963 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3980255 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3980282 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3980568 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3980812 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3981220 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3981506 00:25:07.654 Removing: 
/var/run/dpdk/spdk_pid3981833 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3986732 00:25:07.654 Removing: /var/run/dpdk/spdk_pid3991552 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4003434 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4004261 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4010063 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4010417 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4015576 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4022274 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4025006 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4036976 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4066363 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4071017 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4120749 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4126830 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4133563 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4143915 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4185507 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4186520 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4187687 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4188922 00:25:07.654 Removing: /var/run/dpdk/spdk_pid4194264 00:25:07.654 Removing: /var/run/dpdk/spdk_pid44378 00:25:07.654 Removing: /var/run/dpdk/spdk_pid47292 00:25:07.654 Removing: /var/run/dpdk/spdk_pid53780 00:25:07.654 Removing: /var/run/dpdk/spdk_pid65557 00:25:07.654 Removing: /var/run/dpdk/spdk_pid65632 00:25:07.654 Removing: /var/run/dpdk/spdk_pid8754 00:25:07.654 Removing: /var/run/dpdk/spdk_pid88507 00:25:07.654 Removing: /var/run/dpdk/spdk_pid88776 00:25:07.654 Removing: /var/run/dpdk/spdk_pid95617 00:25:07.654 Removing: /var/run/dpdk/spdk_pid95959 00:25:07.654 Removing: /var/run/dpdk/spdk_pid97939 00:25:07.654 Clean 00:25:07.913 13:03:34 -- common/autotest_common.sh@1453 -- # return 0 00:25:07.913 13:03:34 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:25:07.913 13:03:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:07.913 13:03:34 -- common/autotest_common.sh@10 -- # set +x 00:25:07.913 13:03:34 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:25:07.913 13:03:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:07.913 13:03:34 -- common/autotest_common.sh@10 -- # set +x 00:25:07.913 13:03:34 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:07.913 13:03:34 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:25:07.913 13:03:34 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:25:07.913 13:03:34 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:25:07.913 13:03:34 -- spdk/autotest.sh@398 -- # hostname 00:25:07.913 13:03:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:25:08.172 geninfo: WARNING: invalid characters removed from testname! 
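[editor's note] The lcov capture just above and the merge/filter commands that follow implement the coverage flow configured at the top of this section (LCOV_OPTS with branch and function coverage enabled). Condensed for reference, using only flags that appear in this log:

    LCOV_RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    # capture per-test counters from the build tree, tagged with the host name
    lcov $LCOV_RC -q -c --no-external -d ./spdk -t spdk-wfp-21 -o cov_test.info
    # merge with the baseline, then strip DPDK and system sources
    lcov $LCOV_RC -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov $LCOV_RC -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov $LCOV_RC -q -r cov_total.info '/usr/*' -o cov_total.info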
00:25:30.092 13:03:54 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:31.031 13:03:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:32.411 13:03:58 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:34.315 13:04:00 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:36.224 13:04:02 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:37.602 13:04:03 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:39.508 13:04:05 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:39.508 13:04:05 -- spdk/autorun.sh@1 -- $ timing_finish 00:25:39.508 13:04:05 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:25:39.508 13:04:05 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:39.508 13:04:05 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:39.508 13:04:05 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:39.508 + [[ -n 3877277 ]] 00:25:39.508 + sudo kill 3877277 00:25:39.518 [Pipeline] } 00:25:39.539 [Pipeline] // stage 00:25:39.545 [Pipeline] } 00:25:39.565 [Pipeline] 
// timeout 00:25:39.571 [Pipeline] } 00:25:39.590 [Pipeline] // catchError 00:25:39.598 [Pipeline] } 00:25:39.612 [Pipeline] // wrap 00:25:39.618 [Pipeline] } 00:25:39.629 [Pipeline] // catchError 00:25:39.638 [Pipeline] stage 00:25:39.640 [Pipeline] { (Epilogue) 00:25:39.652 [Pipeline] catchError 00:25:39.654 [Pipeline] { 00:25:39.666 [Pipeline] echo 00:25:39.668 Cleanup processes 00:25:39.674 [Pipeline] sh 00:25:39.961 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:39.961 120718 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:39.978 [Pipeline] sh 00:25:40.271 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:40.271 ++ grep -v 'sudo pgrep' 00:25:40.271 ++ awk '{print $1}' 00:25:40.271 + sudo kill -9 00:25:40.271 + true 00:25:40.284 [Pipeline] sh 00:25:40.569 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:40.569 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:25:44.762 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:25:48.971 [Pipeline] sh 00:25:49.257 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:49.257 Artifacts sizes are good 00:25:49.272 [Pipeline] archiveArtifacts 00:25:49.281 Archiving artifacts 00:25:49.436 [Pipeline] sh 00:25:49.854 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:25:49.867 [Pipeline] cleanWs 00:25:49.877 [WS-CLEANUP] Deleting project workspace... 00:25:49.877 [WS-CLEANUP] Deferred wipeout is used... 00:25:49.884 [WS-CLEANUP] done 00:25:49.886 [Pipeline] } 00:25:49.907 [Pipeline] // catchError 00:25:49.921 [Pipeline] sh 00:25:50.202 + logger -p user.info -t JENKINS-CI 00:25:50.212 [Pipeline] } 00:25:50.229 [Pipeline] // stage 00:25:50.235 [Pipeline] } 00:25:50.249 [Pipeline] // node 00:25:50.253 [Pipeline] End of Pipeline 00:25:50.291 Finished: SUCCESS
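[editor's note] One idiom worth noting from the epilogue above: process cleanup never fails the stage outright. The pgrep listing excludes its own invocation, and the kill is chased by "+ true" so an empty match set cannot abort the pipeline. A standalone sketch of the same guard, with the workspace path copied from this log:

    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk \
           | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill survivors if any; swallow the error when nothing matched
    [ -n "$pids" ] && sudo kill -9 $pids || true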